diff --git a/attached_assets/Pasted--Enhanced-ThrillWiki-Header-Icons-Sizing-Prompt-xml-instructions-Increase-the-size-of-the-the-1758661913007_1758661913007.txt b/attached_assets/Pasted--Enhanced-ThrillWiki-Header-Icons-Sizing-Prompt-xml-instructions-Increase-the-size-of-the-the-1758661913007_1758661913007.txt deleted file mode 100644 index 1f253fc2..00000000 --- a/attached_assets/Pasted--Enhanced-ThrillWiki-Header-Icons-Sizing-Prompt-xml-instructions-Increase-the-size-of-the-the-1758661913007_1758661913007.txt +++ /dev/null @@ -1,119 +0,0 @@ -# Enhanced ThrillWiki Header Icons Sizing Prompt - -```xml - -Increase the size of the theme toggle icon and user profile icon in ThrillWiki's header navigation. The icons should be more prominent and touch-friendly while maintaining visual harmony with the existing Django Cotton header component design. Update the CSS classes and ensure proper scaling across different screen sizes using ThrillWiki's responsive design patterns. - - - -ThrillWiki uses Django Cotton templating for the header component, likely located in a `header.html` template or Cotton component. The header contains navigation elements, theme toggle functionality (probably using AlpineJS for state management), and user authentication status indicators. The current icon sizing may be using utility classes or custom CSS within the Django project structure. - -Technologies involved: -- Django Cotton for templating -- AlpineJS for theme toggle interactivity -- CSS/Tailwind for styling and responsive design -- Responsive design patterns for mobile usability - - - -Current header structure likely resembles: -```html - -
- -
-``` - -Enhanced version should increase to: -```html - - - -
- -
-``` -
- - -w-4 h-4 (16px) -w-6 h-6 (24px) mobile, w-7 h-7 (28px) desktop -header.html, base.html, or dedicated Cotton component -Utility classes with responsive modifiers -AlpineJS theme toggle, Django user authentication - - - -The header icons need to be enlarged while considering: -1. Touch accessibility (minimum 44px touch targets) -2. Visual balance with other header elements -3. Responsive behavior across devices -4. Consistency with ThrillWiki's design system -5. Proper spacing to avoid crowding -6. Potential impact on mobile header layout - -Development approach should: -- Locate the header template/component -- Identify current icon sizing classes -- Update with responsive sizing utilities -- Test across breakpoints -- Ensure touch targets meet accessibility standards - - - -**Phase 1: Locate & Analyze** -- Find header template in Django Cotton components -- Identify current icon classes and sizing -- Document existing responsive behavior - -**Phase 2: Update Sizing** -- Replace icon size classes with larger variants -- Add responsive modifiers for different screen sizes -- Maintain proper spacing and alignment - -**Phase 3: Test & Refine** -- Test header layout on mobile, tablet, desktop -- Verify theme toggle functionality still works -- Check user menu interactions -- Ensure accessibility compliance (touch targets) - -**Phase 4: Optimize** -- Adjust spacing if needed for visual balance -- Confirm consistency with ThrillWiki design patterns -- Test with different user states (logged in/out) - - - -Common issues to watch for: -- Icons becoming too large and breaking header layout -- Responsive breakpoints causing icon jumping -- AlpineJS theme toggle losing functionality after DOM changes -- User menu positioning issues with larger icons -- Touch target overlapping with adjacent elements - -Django/HTMX considerations: -- Ensure icon changes don't break HTMX partial updates -- Verify Django Cotton component inheritance -- Check if icons are SVGs, icon fonts, or images - - - -1. **Visual Testing**: Check header appearance across screen sizes -2. **Functional Testing**: Verify theme toggle and user menu still work -3. **Accessibility Testing**: Confirm touch targets meet 44px minimum -4. **Cross-browser Testing**: Ensure consistent rendering -5. **Mobile Testing**: Test on actual mobile devices for usability - -``` \ No newline at end of file diff --git a/attached_assets/Pasted--Enhanced-ThrillWiki-Park-Listing-Page-Optimized-Prompt-xml-instructions-Create-an-improved-1758662639774_1758662639774.txt b/attached_assets/Pasted--Enhanced-ThrillWiki-Park-Listing-Page-Optimized-Prompt-xml-instructions-Create-an-improved-1758662639774_1758662639774.txt deleted file mode 100644 index 9c437e53..00000000 --- a/attached_assets/Pasted--Enhanced-ThrillWiki-Park-Listing-Page-Optimized-Prompt-xml-instructions-Create-an-improved-1758662639774_1758662639774.txt +++ /dev/null @@ -1,147 +0,0 @@ -# Enhanced ThrillWiki Park Listing Page - Optimized Prompt - -```xml - -Create an improved park listing page for ThrillWiki that prioritizes user experience with intelligent filtering, real-time autocomplete search, and clean pagination. Build using Django Cotton templates, HTMX for dynamic interactions, and AlpineJS for reactive filtering components. Focus on accessibility, performance, and intuitive navigation without infinite scroll complexity. - -Key requirements: -- Fast, responsive autocomplete search leveraging available database fields -- Multi-criteria filtering with live updates based on existing Park model attributes -- Clean pagination with proper Django pagination controls -- Optimized park card layout using CloudFlare Images -- Accessible design following WCAG guidelines -- Mobile-first responsive approach - - - -Working with ThrillWiki's existing Django infrastructure: -- Unknown Park model structure - will need to examine current fields and relationships -- Potential integration with PostGIS if geographic data exists -- Unknown filtering criteria - will discover available Park attributes for filtering -- Unknown review/rating system - will check if rating data is available - -The page should integrate with: -- Django Cotton templating system for consistent components -- HTMX endpoints for search and filtering without full page reloads -- AlpineJS for client-side filter state management -- CloudFlare Images for optimized park images (if image fields exist) -- Existing ThrillWiki URL patterns and view structure - - - -Park listing page structure (adaptable based on discovered model fields): -```html - -
- - - - -
- -
-
-
- - -
- - - - -
- - - -``` - -Expected development approach: -1. Examine existing Park model to understand available fields -2. Identify searchable and filterable attributes -3. Design search/filter UI based on discovered data structure -4. Implement pagination with Django's built-in Paginator -5. Optimize queries and add HTMX interactions -
- - -Park (structure to be discovered), related models TBD -PostgreSQL full-text search, PostGIS if geographic fields exist -Django Cotton + HTMX + AlpineJS -CloudFlare Images (if image fields exist in Park model) -Traditional pagination with Django Paginator -WCAG 2.1 AA compliance -Park model fields, existing views/URLs, current template structure - - - -Since we don't know the Park model structure, the development approach needs to be discovery-first: - -1. **Model Discovery**: First step must be examining the Park model to understand: - - Available fields for display (name, description, etc.) - - Searchable text fields - - Filterable attributes (categories, status, etc.) - - Geographic data (if PostGIS integration exists) - - Image fields (for CloudFlare Images optimization) - - Relationship fields (foreign keys, many-to-many) - -2. **Search Strategy**: Build search functionality based on discovered text fields - - Use Django's full-text search capabilities - - Add PostGIS spatial search if location fields exist - - Implement autocomplete based on available searchable fields - -3. **Filter Design**: Create filters dynamically based on model attributes - - Categorical fields become dropdown/checkbox filters - - Numeric fields become range filters - - Boolean fields become toggle filters - - Date fields become date range filters - -4. **Display Optimization**: Design park cards using available fields - - Prioritize essential information (name, basic details) - - Use CloudFlare Images if image fields exist - - Handle cases where optional fields might be empty - -5. **Performance Considerations**: - - Use Django's select_related and prefetch_related based on discovered relationships - - Add database indexes for commonly searched/filtered fields - - Implement efficient pagination - -The checkpoint approach will be: -- Checkpoint 1: Discover and document Park model structure -- Checkpoint 2: Build basic listing with pagination -- Checkpoint 3: Add search functionality based on available fields -- Checkpoint 4: Implement filters based on model attributes -- Checkpoint 5: Add HTMX interactions and optimize performance -- Checkpoint 6: Polish UI/UX and add accessibility features - - - -1. **Discovery Phase**: Examine Park model, existing views, and current templates -2. **Basic Listing**: Create paginated park list with Django Cotton templates -3. **Search Implementation**: Add autocomplete search based on available text fields -4. **Filter System**: Build dynamic filters based on discovered model attributes -5. **HTMX Integration**: Add dynamic interactions without page reloads -6. **Optimization**: Performance tuning, image optimization, accessibility -7. **Testing**: Cross-browser testing, mobile responsiveness, user experience validation - - - -Before implementation, investigate: -1. What fields does the Park model contain? -2. Are there geographic/location fields that could leverage PostGIS? -3. What relationships exist (foreign keys to Location, Category, etc.)? -4. Is there a rating/review system connected to parks? -5. What image fields exist and how are they currently handled? -6. What existing views and URL patterns are in place? -7. What search functionality currently exists? -8. What Django Cotton components are already available? - -``` \ No newline at end of file diff --git a/attached_assets/Pasted--div-class-flex-gap-8-Left-Column-div-class-flex-1-space-y-1758510246168_1758510246168.txt b/attached_assets/Pasted--div-class-flex-gap-8-Left-Column-div-class-flex-1-space-y-1758510246168_1758510246168.txt deleted file mode 100644 index 01fc6ce3..00000000 --- a/attached_assets/Pasted--div-class-flex-gap-8-Left-Column-div-class-flex-1-space-y-1758510246168_1758510246168.txt +++ /dev/null @@ -1,55 +0,0 @@ -
- -
- - -
-

Parks

-

Explore theme parks worldwide

-
-
- - - -
-

Manufacturers

-

Ride and attraction manufacturers

-
-
- - - -
-

Operators

-

Theme park operating companies

-
-
-
- - -
- - -
-

Rides

-

Discover rides and attractions

-
-
- - - -
-

Designers

-

Ride designers and architects

-
-
- - - -
-

Top Lists

-

Community rankings and favorites

-
-
-
-
\ No newline at end of file diff --git a/attached_assets/Pasted-Alpine-components-script-is-loading-alpine-components-js-10-9-getEmbedInfo-content-js-388-11-NO-O-1758506533010_1758506533010.txt b/attached_assets/Pasted-Alpine-components-script-is-loading-alpine-components-js-10-9-getEmbedInfo-content-js-388-11-NO-O-1758506533010_1758506533010.txt deleted file mode 100644 index ad0aefa9..00000000 --- a/attached_assets/Pasted-Alpine-components-script-is-loading-alpine-components-js-10-9-getEmbedInfo-content-js-388-11-NO-O-1758506533010_1758506533010.txt +++ /dev/null @@ -1,74 +0,0 @@ -Alpine components script is loading... alpine-components.js:10:9 -getEmbedInfo content.js:388:11 -NO OEMBED content.js:456:11 -Registering Alpine.js components... alpine-components.js:24:11 -Alpine.js components registered successfully alpine-components.js:734:11 -downloadable font: Glyph bbox was incorrect (glyph ids 2 3 5 8 9 10 11 12 14 17 19 21 22 32 34 35 39 40 43 44 45 46 47 49 51 52 54 56 57 58 60 61 62 63 64 65 67 68 69 71 74 75 76 77 79 86 89 91 96 98 99 100 102 103 109 110 111 113 116 117 118 124 127 128 129 130 132 133 134 137 138 140 142 143 145 146 147 155 156 159 160 171 172 173 177 192 201 202 203 204 207 208 209 210 225 231 233 234 235 238 239 243 244 246 252 253 254 256 259 261 262 268 269 278 279 280 281 285 287 288 295 296 302 303 304 305 307 308 309 313 315 322 324 353 355 356 357 360 362 367 370 371 376 390 396 397 398 400 403 404 407 408 415 416 417 418 423 424 425 427 428 432 433 434 435 436 439 451 452 455 461 467 470 471 482 483 485 489 491 496 499 500 505 514 529 532 541 542 543 547 549 551 553 554 555 556 557 559 579 580 581 582 584 591 592 593 594 595 596 597 600 601 608 609 614 615 622 624 649 658 659 662 664 673 679 680 681 682 684 687 688 689 692 693 694 695 696 698 699 700 702 708 710 711 712 714 716 719 723 724 727 728 729 731 732 733 739 750 751 754 755 756 758 759 761 762 763 766 770 776 778 781 792 795 798 800 802 803 807 808 810 813 818 822 823 826 834 837 854 860 861 862 863 866 867 871 872 874 875 881 882 883 886 892 894 895 897 898 900 901 902 907 910 913 915 917 920 927 936 937 943 945 946 947 949 950 951 954 955 956 958 961 962 964 965 966 968 969 970 974 976 978 980 981 982 985 986 991 992 998 1000 1001 1007 1008 1009 1010 1014 1016 1018 1020 1022 1023 1024 1027 1028 1033 1034 1035 1036 1037 1040 1041 1044 1045 1047 1048 1049 1053 1054 1055 1056 1057 1059 1061 1063 1064 1065 1072 1074 1075 1078 1079 1080 1081 1085 1086 1087 1088 1093 1095 1099 1100 1111 1112 1115 1116 1117 1120 1121 1122 1123 1124 1125) (font-family: "Font Awesome 6 Free" style:normal weight:900 stretch:100 src index:0) source: https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.woff2 -GET -https://d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9dzxxnshv.worf.replit.dev/favicon.ico -[HTTP/1.1 404 Not Found 57ms] - -Error in parsing value for ‘-webkit-text-size-adjust’. Declaration dropped. tailwind.css:162:31 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:137:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:141:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:145:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:149:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:153:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:157:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:161:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:165:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:169:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:173:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:178:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:182:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:186:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:190:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:194:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:198:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:203:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:208:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:212:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:216:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:220:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:225:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:229:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:234:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:238:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:242:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:247:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:251:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:255:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:259:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:263:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:267:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:272:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:276:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:280:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:284:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:288:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:293:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:297:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:301:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:305:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:309:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:314:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:318:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:322:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:326:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:330:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:334:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:339:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:344:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:348:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:352:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:357:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:361:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:365:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:370:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:374:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:379:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:383:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:387:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:391:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:396:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:400:9 diff --git a/attached_assets/Pasted-Alpine-components-script-is-loading-alpine-components-js-10-9-getEmbedInfo-content-js-388-11-NO-O-1758506561782_1758506561783.txt b/attached_assets/Pasted-Alpine-components-script-is-loading-alpine-components-js-10-9-getEmbedInfo-content-js-388-11-NO-O-1758506561782_1758506561783.txt deleted file mode 100644 index ad0aefa9..00000000 --- a/attached_assets/Pasted-Alpine-components-script-is-loading-alpine-components-js-10-9-getEmbedInfo-content-js-388-11-NO-O-1758506561782_1758506561783.txt +++ /dev/null @@ -1,74 +0,0 @@ -Alpine components script is loading... alpine-components.js:10:9 -getEmbedInfo content.js:388:11 -NO OEMBED content.js:456:11 -Registering Alpine.js components... alpine-components.js:24:11 -Alpine.js components registered successfully alpine-components.js:734:11 -downloadable font: Glyph bbox was incorrect (glyph ids 2 3 5 8 9 10 11 12 14 17 19 21 22 32 34 35 39 40 43 44 45 46 47 49 51 52 54 56 57 58 60 61 62 63 64 65 67 68 69 71 74 75 76 77 79 86 89 91 96 98 99 100 102 103 109 110 111 113 116 117 118 124 127 128 129 130 132 133 134 137 138 140 142 143 145 146 147 155 156 159 160 171 172 173 177 192 201 202 203 204 207 208 209 210 225 231 233 234 235 238 239 243 244 246 252 253 254 256 259 261 262 268 269 278 279 280 281 285 287 288 295 296 302 303 304 305 307 308 309 313 315 322 324 353 355 356 357 360 362 367 370 371 376 390 396 397 398 400 403 404 407 408 415 416 417 418 423 424 425 427 428 432 433 434 435 436 439 451 452 455 461 467 470 471 482 483 485 489 491 496 499 500 505 514 529 532 541 542 543 547 549 551 553 554 555 556 557 559 579 580 581 582 584 591 592 593 594 595 596 597 600 601 608 609 614 615 622 624 649 658 659 662 664 673 679 680 681 682 684 687 688 689 692 693 694 695 696 698 699 700 702 708 710 711 712 714 716 719 723 724 727 728 729 731 732 733 739 750 751 754 755 756 758 759 761 762 763 766 770 776 778 781 792 795 798 800 802 803 807 808 810 813 818 822 823 826 834 837 854 860 861 862 863 866 867 871 872 874 875 881 882 883 886 892 894 895 897 898 900 901 902 907 910 913 915 917 920 927 936 937 943 945 946 947 949 950 951 954 955 956 958 961 962 964 965 966 968 969 970 974 976 978 980 981 982 985 986 991 992 998 1000 1001 1007 1008 1009 1010 1014 1016 1018 1020 1022 1023 1024 1027 1028 1033 1034 1035 1036 1037 1040 1041 1044 1045 1047 1048 1049 1053 1054 1055 1056 1057 1059 1061 1063 1064 1065 1072 1074 1075 1078 1079 1080 1081 1085 1086 1087 1088 1093 1095 1099 1100 1111 1112 1115 1116 1117 1120 1121 1122 1123 1124 1125) (font-family: "Font Awesome 6 Free" style:normal weight:900 stretch:100 src index:0) source: https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.woff2 -GET -https://d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9dzxxnshv.worf.replit.dev/favicon.ico -[HTTP/1.1 404 Not Found 57ms] - -Error in parsing value for ‘-webkit-text-size-adjust’. Declaration dropped. tailwind.css:162:31 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:137:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:141:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:145:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:149:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:153:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:157:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:161:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:165:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:169:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:173:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:178:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:182:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:186:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:190:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:194:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:198:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:203:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:208:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:212:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:216:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:220:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:225:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:229:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:234:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:238:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:242:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:247:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:251:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:255:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:259:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:263:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:267:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:272:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:276:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:280:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:284:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:288:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:293:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:297:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:301:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:305:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:309:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:314:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:318:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:322:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:326:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:330:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:334:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:339:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:344:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:348:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:352:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:357:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:361:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:365:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:370:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:374:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:379:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:383:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:387:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:391:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:396:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:400:9 diff --git a/attached_assets/Pasted-Environment-Request-Method-GET-Request-URL-http-d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9-1758416867853_1758416867853.txt b/attached_assets/Pasted-Environment-Request-Method-GET-Request-URL-http-d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9-1758416867853_1758416867853.txt deleted file mode 100644 index b3d45c21..00000000 --- a/attached_assets/Pasted-Environment-Request-Method-GET-Request-URL-http-d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9-1758416867853_1758416867853.txt +++ /dev/null @@ -1,134 +0,0 @@ -Environment: - - -Request Method: GET -Request URL: http://d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9dzxxnshv.worf.replit.dev/ - -Django Version: 5.2.6 -Python Version: 3.13.5 -Installed Applications: -['django.contrib.admin', - 'django.contrib.auth', - 'django.contrib.contenttypes', - 'django.contrib.sessions', - 'django.contrib.messages', - 'django.contrib.staticfiles', - 'django.contrib.sites', - 'django_cloudflareimages_toolkit', - 'rest_framework', - 'rest_framework.authtoken', - 'rest_framework_simplejwt', - 'rest_framework_simplejwt.token_blacklist', - 'dj_rest_auth', - 'dj_rest_auth.registration', - 'drf_spectacular', - 'corsheaders', - 'pghistory', - 'pgtrigger', - 'allauth', - 'allauth.account', - 'allauth.socialaccount', - 'allauth.socialaccount.providers.google', - 'allauth.socialaccount.providers.discord', - 'django_cleanup', - 'django_filters', - 'django_htmx', - 'whitenoise', - 'django_tailwind_cli', - 'autocomplete', - 'health_check', - 'health_check.db', - 'health_check.cache', - 'health_check.storage', - 'health_check.contrib.migrations', - 'health_check.contrib.redis', - 'django_celery_beat', - 'django_celery_results', - 'django_extensions', - 'apps.core', - 'apps.accounts', - 'apps.parks', - 'apps.rides', - 'api', - 'django_forwardemail', - 'apps.moderation', - 'nplusone.ext.django', - 'widget_tweaks'] -Installed Middleware: -['django.middleware.cache.UpdateCacheMiddleware', - 'core.middleware.request_logging.RequestLoggingMiddleware', - 'core.middleware.nextjs.APIResponseMiddleware', - 'core.middleware.performance_middleware.QueryCountMiddleware', - 'core.middleware.performance_middleware.PerformanceMiddleware', - 'nplusone.ext.django.NPlusOneMiddleware', - 'corsheaders.middleware.CorsMiddleware', - 'django.middleware.security.SecurityMiddleware', - 'whitenoise.middleware.WhiteNoiseMiddleware', - 'django.contrib.sessions.middleware.SessionMiddleware', - 'django.middleware.common.CommonMiddleware', - 'django.middleware.csrf.CsrfViewMiddleware', - 'django.contrib.auth.middleware.AuthenticationMiddleware', - 'django.contrib.messages.middleware.MessageMiddleware', - 'django.middleware.clickjacking.XFrameOptionsMiddleware', - 'apps.core.middleware.analytics.PgHistoryContextMiddleware', - 'allauth.account.middleware.AccountMiddleware', - 'django.middleware.cache.FetchFromCacheMiddleware', - 'django_htmx.middleware.HtmxMiddleware'] - - - -Traceback (most recent call last): - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/core/handlers/exception.py", line 55, in inner - response = get_response(request) - ^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/core/handlers/base.py", line 197, in _get_response - response = wrapped_callback(request, *callback_args, **callback_kwargs) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/views/generic/base.py", line 105, in view - return self.dispatch(request, *args, **kwargs) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/views/generic/base.py", line 144, in dispatch - return handler(request, *args, **kwargs) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/views/generic/base.py", line 228, in get - context = self.get_context_data(**kwargs) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/thrillwiki/views.py", line 29, in get_context_data - "total_parks": Park.objects.count(), - ^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/models/manager.py", line 87, in manager_method - return getattr(self.get_queryset(), name)(*args, **kwargs) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/models/query.py", line 604, in count - return self.query.get_count(using=self.db) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/models/sql/query.py", line 644, in get_count - return obj.get_aggregation(using, {"__count": Count("*")})["__count"] - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/models/sql/query.py", line 626, in get_aggregation - result = compiler.execute_sql(SINGLE) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/models/sql/compiler.py", line 1623, in execute_sql - cursor.execute(sql, params) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 122, in execute - return super().execute(sql, params) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 79, in execute - return self._execute_with_wrappers( - - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/django/db/backends/utils.py", line 92, in _execute_with_wrappers - return executor(sql, params, many, context) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/pghistory/runtime.py", line 96, in _inject_history_context - if _can_inject_variable(context["cursor"], sql): - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/pghistory/runtime.py", line 77, in _can_inject_variable - and not _is_transaction_errored(cursor) - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/backend/.venv/lib/python3.13/site-packages/pghistory/runtime.py", line 51, in _is_transaction_errored - cursor.connection.get_transaction_status() - ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Exception Type: AttributeError at / -Exception Value: 'sqlite3.Connection' object has no attribute 'get_transaction_status' diff --git a/attached_assets/Pasted-Expected-declaration-but-found-apply-Skipped-to-next-declaration-alerts-css-3-11-Expected-decl-1758506850599_1758506850599.txt b/attached_assets/Pasted-Expected-declaration-but-found-apply-Skipped-to-next-declaration-alerts-css-3-11-Expected-decl-1758506850599_1758506850599.txt deleted file mode 100644 index 313c7523..00000000 --- a/attached_assets/Pasted-Expected-declaration-but-found-apply-Skipped-to-next-declaration-alerts-css-3-11-Expected-decl-1758506850599_1758506850599.txt +++ /dev/null @@ -1,92 +0,0 @@ -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:3:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:8:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:12:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:16:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:20:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:137:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:141:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:145:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:149:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:153:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:157:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:161:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:165:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:169:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:173:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:178:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:182:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:186:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:190:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:194:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:198:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:203:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:208:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:212:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:216:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:220:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:225:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:229:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:234:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:238:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:244:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:249:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:253:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:257:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:261:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:265:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:269:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:274:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:278:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:282:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:286:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:290:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:295:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:299:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:303:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:307:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:311:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:316:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:320:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:324:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:328:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:332:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:336:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:341:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:346:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:350:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:354:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:359:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:363:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:367:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:372:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:376:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:381:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:385:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:389:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:393:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:398:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:402:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:406:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:411:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:416:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:420:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:425:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:430:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:435:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:439:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:443:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:517:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:521:11 -Found invalid value for media feature. components.css:546:26 -getEmbedInfo content.js:388:11 -NO OEMBED content.js:456:11 -Error in parsing value for ‘-webkit-text-size-adjust’. Declaration dropped. tailwind.css:162:31 -Layout was forced before the page was fully loaded. If stylesheets are not yet loaded this may cause a flash of unstyled content. node.js:409:1 -Alpine components script is loading... alpine-components.js:10:9 -Registering Alpine.js components... alpine-components.js:24:11 -Alpine.js components registered successfully alpine-components.js:734:11 -GET -https://d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9dzxxnshv.worf.replit.dev/favicon.ico -[HTTP/1.1 404 Not Found 56ms] - -downloadable font: Glyph bbox was incorrect (glyph ids 2 3 5 8 9 10 11 12 14 17 19 21 22 32 34 35 39 40 43 44 45 46 47 49 51 52 54 56 57 58 60 61 62 63 64 65 67 68 69 71 74 75 76 77 79 86 89 91 96 98 99 100 102 103 109 110 111 113 116 117 118 124 127 128 129 130 132 133 134 137 138 140 142 143 145 146 147 155 156 159 160 171 172 173 177 192 201 202 203 204 207 208 209 210 225 231 233 234 235 238 239 243 244 246 252 253 254 256 259 261 262 268 269 278 279 280 281 285 287 288 295 296 302 303 304 305 307 308 309 313 315 322 324 353 355 356 357 360 362 367 370 371 376 390 396 397 398 400 403 404 407 408 415 416 417 418 423 424 425 427 428 432 433 434 435 436 439 451 452 455 461 467 470 471 482 483 485 489 491 496 499 500 505 514 529 532 541 542 543 547 549 551 553 554 555 556 557 559 579 580 581 582 584 591 592 593 594 595 596 597 600 601 608 609 614 615 622 624 649 658 659 662 664 673 679 680 681 682 684 687 688 689 692 693 694 695 696 698 699 700 702 708 710 711 712 714 716 719 723 724 727 728 729 731 732 733 739 750 751 754 755 756 758 759 761 762 763 766 770 776 778 781 792 795 798 800 802 803 807 808 810 813 818 822 823 826 834 837 854 860 861 862 863 866 867 871 872 874 875 881 882 883 886 892 894 895 897 898 900 901 902 907 910 913 915 917 920 927 936 937 943 945 946 947 949 950 951 954 955 956 958 961 962 964 965 966 968 969 970 974 976 978 980 981 982 985 986 991 992 998 1000 1001 1007 1008 1009 1010 1014 1016 1018 1020 1022 1023 1024 1027 1028 1033 1034 1035 1036 1037 1040 1041 1044 1045 1047 1048 1049 1053 1054 1055 1056 1057 1059 1061 1063 1064 1065 1072 1074 1075 1078 1079 1080 1081 1085 1086 1087 1088 1093 1095 1099 1100 1111 1112 1115 1116 1117 1120 1121 1122 1123 1124 1125) (font-family: "Font Awesome 6 Free" style:normal weight:900 stretch:100 src index:0) source: https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.woff2 diff --git a/attached_assets/Pasted-Expected-declaration-but-found-apply-Skipped-to-next-declaration-alerts-css-3-11-Expected-decl-1758506870792_1758506870792.txt b/attached_assets/Pasted-Expected-declaration-but-found-apply-Skipped-to-next-declaration-alerts-css-3-11-Expected-decl-1758506870792_1758506870792.txt deleted file mode 100644 index 313c7523..00000000 --- a/attached_assets/Pasted-Expected-declaration-but-found-apply-Skipped-to-next-declaration-alerts-css-3-11-Expected-decl-1758506870792_1758506870792.txt +++ /dev/null @@ -1,92 +0,0 @@ -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:3:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:8:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:12:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:16:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. alerts.css:20:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:137:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:141:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:145:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:149:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:153:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:157:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:161:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:165:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:169:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:173:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:178:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:182:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:186:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:190:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:194:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:198:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:203:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:208:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:212:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:216:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:220:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:225:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:229:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:234:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:238:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:244:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:249:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:253:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:257:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:261:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:265:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:269:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:274:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:278:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:282:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:286:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:290:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:295:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:299:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:303:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:307:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:311:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:316:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:320:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:324:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:328:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:332:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:336:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:341:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:346:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:350:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:354:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:359:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:363:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:367:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:372:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:376:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:381:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:385:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:389:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:393:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:398:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:402:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:406:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:411:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:416:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:420:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:425:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:430:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:435:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:439:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:443:9 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:517:11 -Expected declaration but found ‘@apply’. Skipped to next declaration. components.css:521:11 -Found invalid value for media feature. components.css:546:26 -getEmbedInfo content.js:388:11 -NO OEMBED content.js:456:11 -Error in parsing value for ‘-webkit-text-size-adjust’. Declaration dropped. tailwind.css:162:31 -Layout was forced before the page was fully loaded. If stylesheets are not yet loaded this may cause a flash of unstyled content. node.js:409:1 -Alpine components script is loading... alpine-components.js:10:9 -Registering Alpine.js components... alpine-components.js:24:11 -Alpine.js components registered successfully alpine-components.js:734:11 -GET -https://d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9dzxxnshv.worf.replit.dev/favicon.ico -[HTTP/1.1 404 Not Found 56ms] - -downloadable font: Glyph bbox was incorrect (glyph ids 2 3 5 8 9 10 11 12 14 17 19 21 22 32 34 35 39 40 43 44 45 46 47 49 51 52 54 56 57 58 60 61 62 63 64 65 67 68 69 71 74 75 76 77 79 86 89 91 96 98 99 100 102 103 109 110 111 113 116 117 118 124 127 128 129 130 132 133 134 137 138 140 142 143 145 146 147 155 156 159 160 171 172 173 177 192 201 202 203 204 207 208 209 210 225 231 233 234 235 238 239 243 244 246 252 253 254 256 259 261 262 268 269 278 279 280 281 285 287 288 295 296 302 303 304 305 307 308 309 313 315 322 324 353 355 356 357 360 362 367 370 371 376 390 396 397 398 400 403 404 407 408 415 416 417 418 423 424 425 427 428 432 433 434 435 436 439 451 452 455 461 467 470 471 482 483 485 489 491 496 499 500 505 514 529 532 541 542 543 547 549 551 553 554 555 556 557 559 579 580 581 582 584 591 592 593 594 595 596 597 600 601 608 609 614 615 622 624 649 658 659 662 664 673 679 680 681 682 684 687 688 689 692 693 694 695 696 698 699 700 702 708 710 711 712 714 716 719 723 724 727 728 729 731 732 733 739 750 751 754 755 756 758 759 761 762 763 766 770 776 778 781 792 795 798 800 802 803 807 808 810 813 818 822 823 826 834 837 854 860 861 862 863 866 867 871 872 874 875 881 882 883 886 892 894 895 897 898 900 901 902 907 910 913 915 917 920 927 936 937 943 945 946 947 949 950 951 954 955 956 958 961 962 964 965 966 968 969 970 974 976 978 980 981 982 985 986 991 992 998 1000 1001 1007 1008 1009 1010 1014 1016 1018 1020 1022 1023 1024 1027 1028 1033 1034 1035 1036 1037 1040 1041 1044 1045 1047 1048 1049 1053 1054 1055 1056 1057 1059 1061 1063 1064 1065 1072 1074 1075 1078 1079 1080 1081 1085 1086 1087 1088 1093 1095 1099 1100 1111 1112 1115 1116 1117 1120 1121 1122 1123 1124 1125) (font-family: "Font Awesome 6 Free" style:normal weight:900 stretch:100 src index:0) source: https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.woff2 diff --git a/attached_assets/Pasted-Found-invalid-value-for-media-feature-components-css-476-26-Error-in-parsing-value-for-webkit-tex-1758506974620_1758506974621.txt b/attached_assets/Pasted-Found-invalid-value-for-media-feature-components-css-476-26-Error-in-parsing-value-for-webkit-tex-1758506974620_1758506974621.txt deleted file mode 100644 index d43cba67..00000000 --- a/attached_assets/Pasted-Found-invalid-value-for-media-feature-components-css-476-26-Error-in-parsing-value-for-webkit-tex-1758506974620_1758506974621.txt +++ /dev/null @@ -1,12 +0,0 @@ -Found invalid value for media feature. components.css:476:26 -Error in parsing value for ‘-webkit-text-size-adjust’. Declaration dropped. tailwind.css:162:31 -Alpine components script is loading... alpine-components.js:10:9 -Registering Alpine.js components... alpine-components.js:24:11 -Alpine.js components registered successfully alpine-components.js:734:11 -getEmbedInfo content.js:388:11 -NO OEMBED content.js:456:11 -downloadable font: Glyph bbox was incorrect (glyph ids 2 3 5 8 9 10 11 12 14 17 19 21 22 32 34 35 39 40 43 44 45 46 47 49 51 52 54 56 57 58 60 61 62 63 64 65 67 68 69 71 74 75 76 77 79 86 89 91 96 98 99 100 102 103 109 110 111 113 116 117 118 124 127 128 129 130 132 133 134 137 138 140 142 143 145 146 147 155 156 159 160 171 172 173 177 192 201 202 203 204 207 208 209 210 225 231 233 234 235 238 239 243 244 246 252 253 254 256 259 261 262 268 269 278 279 280 281 285 287 288 295 296 302 303 304 305 307 308 309 313 315 322 324 353 355 356 357 360 362 367 370 371 376 390 396 397 398 400 403 404 407 408 415 416 417 418 423 424 425 427 428 432 433 434 435 436 439 451 452 455 461 467 470 471 482 483 485 489 491 496 499 500 505 514 529 532 541 542 543 547 549 551 553 554 555 556 557 559 579 580 581 582 584 591 592 593 594 595 596 597 600 601 608 609 614 615 622 624 649 658 659 662 664 673 679 680 681 682 684 687 688 689 692 693 694 695 696 698 699 700 702 708 710 711 712 714 716 719 723 724 727 728 729 731 732 733 739 750 751 754 755 756 758 759 761 762 763 766 770 776 778 781 792 795 798 800 802 803 807 808 810 813 818 822 823 826 834 837 854 860 861 862 863 866 867 871 872 874 875 881 882 883 886 892 894 895 897 898 900 901 902 907 910 913 915 917 920 927 936 937 943 945 946 947 949 950 951 954 955 956 958 961 962 964 965 966 968 969 970 974 976 978 980 981 982 985 986 991 992 998 1000 1001 1007 1008 1009 1010 1014 1016 1018 1020 1022 1023 1024 1027 1028 1033 1034 1035 1036 1037 1040 1041 1044 1045 1047 1048 1049 1053 1054 1055 1056 1057 1059 1061 1063 1064 1065 1072 1074 1075 1078 1079 1080 1081 1085 1086 1087 1088 1093 1095 1099 1100 1111 1112 1115 1116 1117 1120 1121 1122 1123 1124 1125) (font-family: "Font Awesome 6 Free" style:normal weight:900 stretch:100 src index:0) source: https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.woff2 -GET -https://d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9dzxxnshv.worf.replit.dev/favicon.ico -[HTTP/1.1 404 Not Found 58ms] - diff --git a/attached_assets/Pasted-Found-invalid-value-for-media-feature-components-css-476-26-Error-in-parsing-value-for-webkit-tex-1758506979647_1758506979648.txt b/attached_assets/Pasted-Found-invalid-value-for-media-feature-components-css-476-26-Error-in-parsing-value-for-webkit-tex-1758506979647_1758506979648.txt deleted file mode 100644 index d43cba67..00000000 --- a/attached_assets/Pasted-Found-invalid-value-for-media-feature-components-css-476-26-Error-in-parsing-value-for-webkit-tex-1758506979647_1758506979648.txt +++ /dev/null @@ -1,12 +0,0 @@ -Found invalid value for media feature. components.css:476:26 -Error in parsing value for ‘-webkit-text-size-adjust’. Declaration dropped. tailwind.css:162:31 -Alpine components script is loading... alpine-components.js:10:9 -Registering Alpine.js components... alpine-components.js:24:11 -Alpine.js components registered successfully alpine-components.js:734:11 -getEmbedInfo content.js:388:11 -NO OEMBED content.js:456:11 -downloadable font: Glyph bbox was incorrect (glyph ids 2 3 5 8 9 10 11 12 14 17 19 21 22 32 34 35 39 40 43 44 45 46 47 49 51 52 54 56 57 58 60 61 62 63 64 65 67 68 69 71 74 75 76 77 79 86 89 91 96 98 99 100 102 103 109 110 111 113 116 117 118 124 127 128 129 130 132 133 134 137 138 140 142 143 145 146 147 155 156 159 160 171 172 173 177 192 201 202 203 204 207 208 209 210 225 231 233 234 235 238 239 243 244 246 252 253 254 256 259 261 262 268 269 278 279 280 281 285 287 288 295 296 302 303 304 305 307 308 309 313 315 322 324 353 355 356 357 360 362 367 370 371 376 390 396 397 398 400 403 404 407 408 415 416 417 418 423 424 425 427 428 432 433 434 435 436 439 451 452 455 461 467 470 471 482 483 485 489 491 496 499 500 505 514 529 532 541 542 543 547 549 551 553 554 555 556 557 559 579 580 581 582 584 591 592 593 594 595 596 597 600 601 608 609 614 615 622 624 649 658 659 662 664 673 679 680 681 682 684 687 688 689 692 693 694 695 696 698 699 700 702 708 710 711 712 714 716 719 723 724 727 728 729 731 732 733 739 750 751 754 755 756 758 759 761 762 763 766 770 776 778 781 792 795 798 800 802 803 807 808 810 813 818 822 823 826 834 837 854 860 861 862 863 866 867 871 872 874 875 881 882 883 886 892 894 895 897 898 900 901 902 907 910 913 915 917 920 927 936 937 943 945 946 947 949 950 951 954 955 956 958 961 962 964 965 966 968 969 970 974 976 978 980 981 982 985 986 991 992 998 1000 1001 1007 1008 1009 1010 1014 1016 1018 1020 1022 1023 1024 1027 1028 1033 1034 1035 1036 1037 1040 1041 1044 1045 1047 1048 1049 1053 1054 1055 1056 1057 1059 1061 1063 1064 1065 1072 1074 1075 1078 1079 1080 1081 1085 1086 1087 1088 1093 1095 1099 1100 1111 1112 1115 1116 1117 1120 1121 1122 1123 1124 1125) (font-family: "Font Awesome 6 Free" style:normal weight:900 stretch:100 src index:0) source: https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/webfonts/fa-solid-900.woff2 -GET -https://d6d61dac-164d-45dd-929f-7dcdfd771b64-00-1bpe9dzxxnshv.worf.replit.dev/favicon.ico -[HTTP/1.1 404 Not Found 58ms] - diff --git a/attached_assets/Pasted-Traceback-most-recent-call-last-File-home-runner-workspace-venv-lib-python3-13-site-packages-1758551531707_1758551531707.txt b/attached_assets/Pasted-Traceback-most-recent-call-last-File-home-runner-workspace-venv-lib-python3-13-site-packages-1758551531707_1758551531707.txt deleted file mode 100644 index d99eb1aa..00000000 --- a/attached_assets/Pasted-Traceback-most-recent-call-last-File-home-runner-workspace-venv-lib-python3-13-site-packages-1758551531707_1758551531707.txt +++ /dev/null @@ -1,116 +0,0 @@ -Traceback (most recent call last): - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/contrib/staticfiles/handlers.py", line 80, in __call__ - return self.application(environ, start_response) - ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/core/handlers/wsgi.py", line 124, in __call__ - response = self.get_response(request) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/core/handlers/base.py", line 140, in get_response - response = self._middleware_chain(request) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/core/handlers/exception.py", line 57, in inner - response = response_for_exception(request, exc) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/core/handlers/exception.py", line 141, in response_for_exception - response = handle_uncaught_exception( - - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/core/handlers/exception.py", line 182, in handle_uncaught_exception - return debug.technical_500_response(request, *exc_info) - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django_extensions/management/technical_response.py", line 41, in null_technical_500_response - raise exc_value.with_traceback(tb) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/core/handlers/exception.py", line 55, in inner - response = get_response(request) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/core/handlers/base.py", line 220, in _get_response - response = response.render() - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/response.py", line 114, in render - self.content = self.rendered_content - ^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/response.py", line 92, in rendered_content - return template.render(context, self._request) - ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/backends/django.py", line 107, in render - return self.template.render(context) - ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 171, in render - return self._render(context) - ~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 163, in _render - return self.nodelist.render(context) - ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 1016, in render - return SafeString("".join([node.render_annotated(context) for node in self])) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/loader_tags.py", line 159, in render - return compiled_parent._render(context) - ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 163, in _render - return self.nodelist.render(context) - ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 1016, in render - return SafeString("".join([node.render_annotated(context) for node in self])) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/loader_tags.py", line 65, in render - result = block.nodelist.render(context) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 1016, in render - return SafeString("".join([node.render_annotated(context) for node in self])) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/defaulttags.py", line 243, in render - nodelist.append(node.render_annotated(context)) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django_cotton/templatetags/_component.py", line 86, in render - output = template.render(context) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 173, in render - return self._render(context) - ~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 163, in _render - return self.nodelist.render(context) - ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 1016, in render - return SafeString("".join([node.render_annotated(context) for node in self])) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django_cotton/templatetags/_vars.py", line 52, in render - output = self.nodelist.render(context) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 1016, in render - return SafeString("".join([node.render_annotated(context) for node in self])) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/defaulttags.py", line 327, in render - return nodelist.render(context) - ~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 1016, in render - return SafeString("".join([node.render_annotated(context) for node in self])) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/defaulttags.py", line 327, in render - return nodelist.render(context) - ~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 1016, in render - return SafeString("".join([node.render_annotated(context) for node in self])) - ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/base.py", line 977, in render_annotated - return self.render(context) - ~~~~~~~~~~~^^^^^^^^^ - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/template/defaulttags.py", line 480, in render - url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/urls/base.py", line 98, in reverse - resolved_url = resolver._reverse_with_prefix(view, prefix, *args, **kwargs) - File "/home/runner/workspace/.venv/lib/python3.13/site-packages/django/urls/resolvers.py", line 831, in _reverse_with_prefix - raise NoReverseMatch(msg) -django.urls.exceptions.NoReverseMatch: Reverse for 'park_detail' with arguments '('',)' not found. 1 pattern(s) tried: ['parks/(?P[-a-zA-Z0-9_]+)/\\Z'] \ No newline at end of file diff --git a/attached_assets/SCR-20250921-moio_1758477961641.png b/attached_assets/SCR-20250921-moio_1758477961641.png deleted file mode 100644 index edf2475a..00000000 Binary files a/attached_assets/SCR-20250921-moio_1758477961641.png and /dev/null differ diff --git a/attached_assets/SCR-20250921-swjf_1758505678681.png b/attached_assets/SCR-20250921-swjf_1758505678681.png deleted file mode 100644 index 91075718..00000000 Binary files a/attached_assets/SCR-20250921-swjf_1758505678681.png and /dev/null differ diff --git a/attached_assets/image_1758463993565.png b/attached_assets/image_1758463993565.png deleted file mode 100644 index acda0a96..00000000 Binary files a/attached_assets/image_1758463993565.png and /dev/null differ diff --git a/attached_assets/targeted_element_1758419719251.png b/attached_assets/targeted_element_1758419719251.png deleted file mode 100644 index 09eede07..00000000 Binary files a/attached_assets/targeted_element_1758419719251.png and /dev/null differ diff --git a/attached_assets/targeted_element_1758476700613.png b/attached_assets/targeted_element_1758476700613.png deleted file mode 100644 index cf12d1c0..00000000 Binary files a/attached_assets/targeted_element_1758476700613.png and /dev/null differ diff --git a/attached_assets/targeted_element_1758478351483.png b/attached_assets/targeted_element_1758478351483.png deleted file mode 100644 index 83ce0631..00000000 Binary files a/attached_assets/targeted_element_1758478351483.png and /dev/null differ diff --git a/memory-bank/analysis/current-state-analysis.md b/memory-bank/analysis/current-state-analysis.md deleted file mode 100644 index 112574eb..00000000 --- a/memory-bank/analysis/current-state-analysis.md +++ /dev/null @@ -1,187 +0,0 @@ -# Current State Analysis: ThrillWiki Frontend - -## Analysis Summary -ThrillWiki is a mature Django application with existing HTMX and Alpine.js implementation. The current frontend shows good foundational patterns but has opportunities for modernization and enhancement. - -## Current Frontend Architecture - -### Technology Stack -- **HTMX**: v1.9.6 (CDN) -- **Alpine.js**: Local minified version -- **Tailwind CSS**: Custom build with hot reload -- **Font Awesome**: v6.0.0 (CDN) -- **Google Fonts**: Poppins font family - -### Base Template Analysis (`templates/base/base.html`) - -#### Strengths -- Modern responsive design with Tailwind CSS -- Dark mode support with localStorage persistence -- Proper CSRF token handling -- Semantic HTML structure -- Accessibility considerations (ARIA labels) -- Mobile-first responsive navigation -- Alpine.js transitions for smooth UX - -#### Current Patterns -- **Theme System**: Dark/light mode with system preference detection -- **Navigation**: Sticky header with backdrop blur effects -- **User Authentication**: Modal-based login/signup via HTMX -- **Dropdown Menus**: Alpine.js powered with transitions -- **Mobile Menu**: Responsive hamburger menu -- **Flash Messages**: Fixed positioning with alert system - -#### CSS Architecture -- Gradient backgrounds for visual appeal -- Custom CSS variables for theming -- Tailwind utility classes for rapid development -- Custom dropdown and indicator styles -- HTMX loading indicators - -### HTMX Implementation Patterns - -#### Current Usage -- **Dynamic Content Loading**: Park list filtering and search -- **Modal Management**: Login/signup forms loaded dynamically -- **Form Submissions**: Real-time filtering without page refresh -- **URL Management**: `hx-push-url="true"` for browser history -- **Target Swapping**: Specific element updates (`hx-target`) - -#### HTMX Triggers -- `hx-trigger="load"` for initial content loading -- `hx-trigger="change from:select"` for form elements -- `hx-trigger="input delay:500ms"` for debounced search -- `hx-trigger="click from:.status-filter"` for button interactions - -### Alpine.js Implementation Patterns - -#### Current Usage -- **Dropdown Management**: `x-data="{ open: false }"` pattern -- **Location Search**: Complex autocomplete functionality -- **Transitions**: Smooth show/hide animations -- **Click Outside**: `@click.outside` for closing dropdowns -- **Event Handling**: `@click`, `@input.debounce` patterns - -#### Alpine.js Components -- **locationSearch()**: Reusable autocomplete component -- **Dropdown menus**: User profile and auth menus -- **Theme toggle**: Dark mode switching - -### Template Structure Analysis - -#### Parks List Template (`templates/parks/park_list.html`) - -**Strengths:** -- Comprehensive filtering system (search, location, status) -- Real-time updates via HTMX -- Responsive grid layout -- Status badge system with visual indicators -- Location autocomplete with API integration - -**Current Patterns:** -- Form-based filtering with HTMX integration -- Alpine.js for complex interactions (location search) -- Mixed JavaScript functions for status toggling -- Hidden input management for multi-select filters - -**Areas for Improvement:** -- Mixed Alpine.js and vanilla JS patterns -- Complex inline JavaScript in templates -- Status filter logic could be more Alpine.js native -- Form state management could be centralized - -## Model Relationships Analysis - -### Core Entities -- **Parks**: Central entity with operators, locations, status -- **Rides**: Belong to parks, have manufacturers/designers -- **Operators**: Companies operating parks -- **Manufacturers**: Companies making rides -- **Designers**: Entities designing rides -- **Reviews**: User-generated content -- **Media**: Photo management system - -### Entity Relationships (from .clinerules) -- Parks → Operators (required) -- Parks → PropertyOwners (optional) -- Rides → Parks (required) -- Rides → Manufacturers (optional) -- Rides → Designers (optional) - -## Current Functionality Assessment - -### Implemented Features -- **Park Management**: CRUD operations with filtering -- **Ride Management**: Complex forms with conditional fields -- **User Authentication**: Modal-based login/signup -- **Search System**: Global and entity-specific search -- **Photo Management**: Upload and gallery systems -- **Location Services**: Geocoding and autocomplete -- **Moderation System**: Content approval workflows -- **Review System**: User ratings and comments - -### HTMX Integration Points -- Dynamic form loading and submission -- Real-time filtering and search -- Modal management for auth flows -- Partial template updates -- URL state management - -### Alpine.js Integration Points -- Interactive dropdowns and menus -- Location autocomplete components -- Theme switching -- Form state management -- Transition animations - -## Pain Points Identified - -### Technical Debt -1. **Mixed JavaScript Patterns**: Combination of Alpine.js and vanilla JS -2. **Inline Scripts**: JavaScript embedded in templates -3. **Component Reusability**: Limited reusable component patterns -4. **State Management**: Scattered state across components -5. **Form Validation**: Basic validation, could be enhanced - -### User Experience Issues -1. **Loading States**: Limited loading indicators -2. **Error Handling**: Basic error messaging -3. **Mobile Experience**: Could be enhanced -4. **Accessibility**: Good foundation but could be improved -5. **Performance**: Multiple CDN dependencies - -### Design System Gaps -1. **Component Library**: No formal component system -2. **Design Tokens**: Limited CSS custom properties -3. **Animation System**: Basic transitions only -4. **Typography Scale**: Single font family -5. **Color System**: Basic Tailwind colors - -## Improvement Opportunities - -### High Priority -1. **Unified JavaScript Architecture**: Standardize on Alpine.js patterns -2. **Component System**: Create reusable UI components -3. **Enhanced Loading States**: Better user feedback -4. **Form Validation**: Real-time validation with Alpine.js -5. **Error Handling**: Comprehensive error management - -### Medium Priority -1. **Design System**: Formal component library -2. **Performance**: Optimize bundle sizes -3. **Accessibility**: Enhanced ARIA support -4. **Mobile Experience**: Touch-friendly interactions -5. **Animation System**: Micro-interactions and transitions - -### Low Priority -1. **Advanced HTMX**: Server-sent events, WebSocket integration -2. **Progressive Enhancement**: Offline capabilities -3. **Advanced Search**: Faceted search interface -4. **Data Visualization**: Charts and analytics -5. **Internationalization**: Multi-language support - -## Next Steps -1. Research modern UI/UX patterns using context7 -2. Study HTMX best practices and advanced techniques -3. Investigate Alpine.js optimization strategies -4. Plan new template architecture based on findings \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/01-source-analysis-overview.md b/memory-bank/projects/django-to-symfony-conversion/01-source-analysis-overview.md deleted file mode 100644 index 85d42b62..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/01-source-analysis-overview.md +++ /dev/null @@ -1,495 +0,0 @@ -# Django ThrillWiki Source Analysis - Symfony Conversion Foundation - -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Complete analysis of Django ThrillWiki for Symfony conversion planning -**Status:** Source Analysis Phase - Complete Foundation Documentation - -## Executive Summary - -This document provides a comprehensive analysis of the current Django ThrillWiki implementation to serve as the definitive source for planning and executing a Symfony conversion. The analysis covers all architectural layers, entity relationships, features, and implementation patterns that must be replicated or adapted in Symfony. - -## Project Overview - -ThrillWiki is a sophisticated Django-based theme park and ride database application featuring: - -- **18 Django Apps** with distinct responsibilities -- **PostgreSQL + PostGIS** for geographic data -- **HTMX + Tailwind CSS** for modern frontend interactions -- **Comprehensive history tracking** via django-pghistory -- **User-generated content** with moderation workflows -- **Social authentication** and role-based access control -- **Advanced search** and autocomplete functionality -- **Media management** with approval workflows - -## Source Architecture Analysis - -### Core Framework Stack - -``` -Django 5.0+ (Python 3.11+) -├── Database: PostgreSQL + PostGIS -├── Frontend: HTMX + Tailwind CSS + Alpine.js -├── Authentication: django-allauth (Google, Discord) -├── History: django-pghistory + pgtrigger -├── Media: Pillow + django-cleanup -├── Testing: Playwright + pytest -└── Package Management: UV -``` - -### Django Apps Architecture - -#### **Core Entity Apps (Business Logic)** -1. **parks** - Theme park management with geographic location -2. **rides** - Ride database with detailed specifications -3. **operators** - Companies that operate parks -4. **property_owners** - Companies that own park property -5. **manufacturers** - Companies that manufacture rides -6. **designers** - Companies/individuals that design rides - -#### **User Management Apps** -7. **accounts** - Extended User model with profiles and top lists -8. **reviews** - User review system with ratings and photos - -#### **Content Management Apps** -9. **media** - Photo management with approval workflow -10. **moderation** - Content moderation and submission system - -#### **Supporting Service Apps** -11. **location** - Geographic services with PostGIS -12. **analytics** - Page view tracking and trending content -13. **search** - Global search across all content types -14. **history_tracking** - Change tracking and audit trails -15. **email_service** - Email management and notifications - -#### **Infrastructure Apps** -16. **core** - Shared utilities and base classes -17. **avatars** - User avatar management -18. **history** - History visualization and timeline - -## Entity Relationship Model - -### Primary Entities & Relationships - -```mermaid -erDiagram - Park ||--|| Operator : "operated_by (required)" - Park ||--o| PropertyOwner : "owned_by (optional)" - Park ||--o{ ParkArea : "contains" - Park ||--o{ Ride : "hosts" - Park ||--o{ Location : "located_at" - Park ||--o{ Photo : "has_photos" - Park ||--o{ Review : "has_reviews" - - Ride ||--|| Park : "belongs_to (required)" - Ride ||--o| ParkArea : "located_in" - Ride ||--o| Manufacturer : "manufactured_by" - Ride ||--o| Designer : "designed_by" - Ride ||--o| RideModel : "instance_of" - Ride ||--o| RollerCoasterStats : "has_stats" - - User ||--|| UserProfile : "has_profile" - User ||--o{ Review : "writes" - User ||--o{ TopList : "creates" - User ||--o{ EditSubmission : "submits" - User ||--o{ PhotoSubmission : "uploads" - - RideModel ||--o| Manufacturer : "manufactured_by" - RideModel ||--o{ Ride : "installed_as" -``` - -### Key Entity Definitions (Per .clinerules) - -- **Parks MUST** have an Operator (required relationship) -- **Parks MAY** have a PropertyOwner (optional, usually same as Operator) -- **Rides MUST** belong to a Park (required relationship) -- **Rides MAY** have Manufacturer/Designer (optional relationships) -- **Operators/PropertyOwners/Manufacturers/Designers** are distinct entity types -- **No direct Company entity references** (replaced by specific entity types) - -## Django-Specific Implementation Patterns - -### 1. Model Architecture Patterns - -#### **TrackedModel Base Class** -```python -@pghistory.track() -class Park(TrackedModel): - # Automatic history tracking for all changes - # Slug management with historical preservation - # Generic relations for photos/reviews/locations -``` - -#### **Generic Foreign Keys** -```python -# Photos can be attached to any model -photos = GenericRelation(Photo, related_query_name='park') - -# Reviews can be for parks, rides, etc. -content_type = models.ForeignKey(ContentType) -object_id = models.PositiveIntegerField() -content_object = GenericForeignKey('content_type', 'object_id') -``` - -#### **PostGIS Geographic Fields** -```python -# Location model with geographic data -location = models.PointField(geography=True, null=True, blank=True) -coordinates = models.JSONField(default=dict, blank=True) # Legacy support -``` - -### 2. Authentication & Authorization - -#### **Extended User Model** -```python -class User(AbstractUser): - ROLE_CHOICES = [ - ('USER', 'User'), - ('MODERATOR', 'Moderator'), - ('ADMIN', 'Admin'), - ('SUPERUSER', 'Superuser'), - ] - role = models.CharField(max_length=20, choices=ROLE_CHOICES, default='USER') - user_id = models.CharField(max_length=20, unique=True) # Public ID -``` - -#### **Social Authentication** -- Google OAuth2 integration -- Discord OAuth2 integration -- Turnstile CAPTCHA protection -- Email verification workflows - -### 3. Frontend Architecture - -#### **HTMX Integration** -```python -# HTMX-aware views -def search_suggestions(request): - if request.htmx: - return render(request, 'search/partials/suggestions.html', context) - return render(request, 'search/full_page.html', context) -``` - -#### **Template Organization** -``` -templates/ -├── base/ - Base layouts and components -├── [app]/ - App-specific templates -│ └── partials/ - HTMX partial templates -├── account/ - Authentication templates -└── pages/ - Static pages -``` - -### 4. Content Moderation System - -#### **Submission Workflow** -```python -class EditSubmission(models.Model): - STATUS_CHOICES = [ - ('PENDING', 'Pending Review'), - ('APPROVED', 'Approved'), - ('REJECTED', 'Rejected'), - ('ESCALATED', 'Escalated'), - ] - # Auto-approval for moderators - # Duplicate detection - # Change tracking -``` - -### 5. Media Management - -#### **Photo Model with Approval** -```python -class Photo(models.Model): - # Generic foreign key for any model association - # EXIF data extraction - # Approval workflow - # Custom storage backend - # Automatic file organization -``` - -## Database Schema Analysis - -### Key Tables Structure - -#### **Core Content Tables** -- `parks_park` - Main park entity -- `parks_parkarea` - Park themed areas -- `rides_ride` - Individual ride installations -- `rides_ridemodel` - Manufacturer ride types -- `rides_rollercoasterstats` - Detailed coaster specs - -#### **Entity Relationship Tables** -- `operators_operator` - Park operating companies -- `property_owners_propertyowner` - Property ownership -- `manufacturers_manufacturer` - Ride manufacturers -- `designers_designer` - Ride designers - -#### **User & Content Tables** -- `accounts_user` - Extended Django user -- `accounts_userprofile` - User profiles and stats -- `media_photo` - Generic photo storage -- `reviews_review` - User reviews with ratings -- `moderation_editsubmission` - Content submissions - -#### **Supporting Tables** -- `location_location` - Geographic data with PostGIS -- `analytics_pageview` - Usage tracking -- `history_tracking_*` - Change audit trails - -#### **History Tables (pghistory)** -- `*_*event` - Automatic history tracking for all models -- Complete audit trail of all changes -- Trigger-based implementation - -## URL Structure Analysis - -### Main URL Patterns -``` -/ - Home with trending content -/admin/ - Django admin interface -/parks/{slug}/ - Park detail pages -/rides/{slug}/ - Ride detail pages -/operators/{slug}/ - Operator profiles -/manufacturers/{slug}/ - Manufacturer profiles -/designers/{slug}/ - Designer profiles -/search/ - Global search interface -/ac/ - Autocomplete endpoints (HTMX) -/accounts/ - User authentication -/moderation/ - Content moderation -/history/ - Change history timeline -``` - -### SEO & Routing Features -- SEO-friendly slugs for all content -- Historical slug support with automatic redirects -- HTMX-compatible partial endpoints -- RESTful resource organization - -## Form System Analysis - -### Key Form Types -1. **Authentication Forms** - Login/signup with Turnstile CAPTCHA -2. **Content Forms** - Park/ride creation and editing -3. **Upload Forms** - Photo uploads with validation -4. **Review Forms** - User rating and review submission -5. **Moderation Forms** - Edit approval workflows - -### Form Features -- HTMX integration for dynamic interactions -- Comprehensive server-side validation -- File upload handling with security -- CSRF protection throughout - -## Search & Autocomplete System - -### Search Implementation -```python -# Global search across multiple models -def global_search(query): - parks = Park.objects.filter(name__icontains=query) - rides = Ride.objects.filter(name__icontains=query) - operators = Operator.objects.filter(name__icontains=query) - # Combine and rank results -``` - -### Autocomplete Features -- HTMX-powered suggestions -- Real-time search as you type -- Multiple entity type support -- Configurable result limits - -## Dependencies & Packages - -### Core Django Packages -```toml -Django = "^5.0" -psycopg2-binary = ">=2.9.9" # PostgreSQL adapter -django-allauth = ">=0.60.1" # Social auth -django-pghistory = ">=3.5.2" # History tracking -django-htmx = ">=1.17.2" # HTMX integration -django-cleanup = ">=8.0.0" # File cleanup -django-filter = ">=23.5" # Advanced filtering -whitenoise = ">=6.6.0" # Static file serving -``` - -### Geographic & Media -```toml -# PostGIS support requires system libraries: -# GDAL_LIBRARY_PATH, GEOS_LIBRARY_PATH -Pillow = ">=10.2.0" # Image processing -``` - -### Development & Testing -```toml -playwright = ">=1.41.0" # E2E testing -pytest-django = ">=4.9.0" # Unit testing -django-tailwind-cli = ">=2.21.1" # CSS framework -``` - -## Key Django Features Utilized - -### 1. **Admin Interface** -- Heavily customized admin for all models -- Bulk operations and advanced filtering -- Moderation workflow integration -- History tracking display - -### 2. **Middleware Stack** -```python -MIDDLEWARE = [ - 'django.middleware.cache.UpdateCacheMiddleware', - 'whitenoise.middleware.WhiteNoiseMiddleware', - 'core.middleware.PgHistoryContextMiddleware', - 'analytics.middleware.PageViewMiddleware', - 'django_htmx.middleware.HtmxMiddleware', - # ... standard Django middleware -] -``` - -### 3. **Context Processors** -```python -TEMPLATES = [{ - 'OPTIONS': { - 'context_processors': [ - 'moderation.context_processors.moderation_access', - # ... standard processors - ] - } -}] -``` - -### 4. **Custom Management Commands** -- Data import/export utilities -- Maintenance and cleanup scripts -- Analytics processing -- Content moderation helpers - -## Static Assets & Frontend - -### CSS Architecture -- **Tailwind CSS** utility-first approach -- Custom CSS in `static/css/src/` -- Component-specific styles -- Dark mode support - -### JavaScript Strategy -- **Minimal custom JavaScript** -- **HTMX** for dynamic interactions -- **Alpine.js** for UI components -- Progressive enhancement approach - -### Media Organization -``` -media/ -├── avatars/ - User profile pictures -├── park/[slug]/ - Park-specific photos -├── ride/[slug]/ - Ride-specific photos -└── submissions/ - User-uploaded content -``` - -## Performance & Optimization - -### Database Optimization -- Proper indexing on frequently queried fields -- `select_related()` and `prefetch_related()` usage -- Generic foreign key indexing -- PostGIS spatial indexing - -### Caching Strategy -- Basic Django cache framework -- Trending content caching -- Static file optimization via WhiteNoise -- HTMX partial caching - -### Geographic Performance -- PostGIS Point fields for efficient spatial queries -- Distance calculations and nearby location queries -- Legacy coordinate support during migration - -## Security Implementation - -### Authentication Security -- Role-based access control (USER, MODERATOR, ADMIN, SUPERUSER) -- Social login with OAuth2 -- Turnstile CAPTCHA protection -- Email verification workflows - -### Data Security -- Django ORM prevents SQL injection -- CSRF protection on all forms -- File upload validation and security -- User input sanitization - -### Authorization Patterns -```python -# Role-based access in views -@user_passes_test(lambda u: u.role in ['MODERATOR', 'ADMIN']) -def moderation_view(request): - # Moderator-only functionality -``` - -## Testing Strategy - -### Test Structure -``` -tests/ -├── e2e/ - Playwright browser tests -├── fixtures/ - Test data fixtures -└── [app]/tests/ - Django unit tests -``` - -### Testing Approach -- **Playwright** for end-to-end browser testing -- **pytest-django** for unit tests -- **Fixture-based** test data management -- **Coverage reporting** for quality assurance - -## Conversion Implications - -This Django implementation presents several key considerations for Symfony conversion: - -### 1. **Entity Framework Mapping** -- Django's ORM patterns → Doctrine ORM -- Generic foreign keys → Polymorphic associations -- PostGIS fields → Geographic types -- History tracking → Event sourcing or audit bundles - -### 2. **Authentication System** -- django-allauth → Symfony Security + OAuth bundles -- Role-based access → Voter system -- Social login → KnpUOAuth2ClientBundle - -### 3. **Frontend Architecture** -- HTMX integration → Symfony UX + Stimulus -- Template system → Twig templates -- Static assets → Webpack Encore - -### 4. **Content Management** -- Django admin → EasyAdmin or Sonata -- Moderation workflow → Custom service layer -- File uploads → VichUploaderBundle - -### 5. **Geographic Features** -- PostGIS → Doctrine DBAL geographic types -- Spatial queries → Custom repository methods - -## Next Steps for Conversion Planning - -1. **Entity Mapping** - Map Django models to Doctrine entities -2. **Bundle Selection** - Choose appropriate Symfony bundles for each feature -3. **Database Migration** - Plan PostgreSQL schema adaptation -4. **Authentication Migration** - Design Symfony Security implementation -5. **Frontend Strategy** - Plan Twig + Stimulus architecture -6. **Testing Migration** - Adapt test suite to PHPUnit - -## References - -- [`memory-bank/documentation/complete-project-review-2025-01-05.md`](../documentation/complete-project-review-2025-01-05.md) - Complete Django analysis -- [`memory-bank/activeContext.md`](../../activeContext.md) - Current project status -- [`.clinerules`](../../../.clinerules) - Project entity relationship rules - ---- - -**Status:** ✅ **COMPLETED** - Source analysis foundation established -**Next:** Entity mapping and Symfony bundle selection planning \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/02-model-analysis-detailed.md b/memory-bank/projects/django-to-symfony-conversion/02-model-analysis-detailed.md deleted file mode 100644 index 40551cf2..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/02-model-analysis-detailed.md +++ /dev/null @@ -1,519 +0,0 @@ -# Django Model Analysis - Detailed Implementation Patterns - -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Detailed Django model analysis for Symfony Doctrine mapping -**Status:** Complete model pattern documentation - -## Overview - -This document provides detailed analysis of Django model implementations, focusing on patterns, relationships, and features that must be mapped to Symfony Doctrine entities during conversion. - -## Core Entity Models Analysis - -### 1. Park Model - Main Entity - -```python -@pghistory.track() -class Park(TrackedModel): - # Primary Fields - id: int # Auto-generated primary key - name = models.CharField(max_length=255) - slug = models.SlugField(max_length=255, unique=True) - description = models.TextField(blank=True) - - # Status Enumeration - STATUS_CHOICES = [ - ("OPERATING", "Operating"), - ("CLOSED_TEMP", "Temporarily Closed"), - ("CLOSED_PERM", "Permanently Closed"), - ("UNDER_CONSTRUCTION", "Under Construction"), - ("DEMOLISHED", "Demolished"), - ("RELOCATED", "Relocated"), - ] - status = models.CharField(max_length=20, choices=STATUS_CHOICES, default="OPERATING") - - # Temporal Fields - opening_date = models.DateField(null=True, blank=True) - closing_date = models.DateField(null=True, blank=True) - operating_season = models.CharField(max_length=255, blank=True) - - # Numeric Fields - size_acres = models.DecimalField(max_digits=10, decimal_places=2, null=True, blank=True) - - # URL Field - website = models.URLField(blank=True) - - # Statistics (Computed/Cached) - ride_count = models.PositiveIntegerField(default=0) - roller_coaster_count = models.PositiveIntegerField(default=0) - - # Foreign Key Relationships - operator = models.ForeignKey( - Operator, - on_delete=models.CASCADE, - related_name='parks' - ) - property_owner = models.ForeignKey( - PropertyOwner, - on_delete=models.SET_NULL, - null=True, - blank=True, - related_name='owned_parks' - ) - - # Generic Relationships - location = GenericRelation(Location, related_query_name='park') - photos = GenericRelation(Photo, related_query_name='park') - reviews = GenericRelation(Review, related_query_name='park') - - # Metadata - created_at = models.DateTimeField(auto_now_add=True) - updated_at = models.DateTimeField(auto_now=True) -``` - -**Symfony Conversion Notes:** -- Enum status field → DoctrineEnum or string with validation -- Generic relations → Polymorphic associations or separate entity relations -- History tracking → Event sourcing or audit bundle -- Computed fields → Doctrine lifecycle callbacks or cached properties - -### 2. Ride Model - Complex Entity with Specifications - -```python -@pghistory.track() -class Ride(TrackedModel): - # Core Identity - name = models.CharField(max_length=255) - slug = models.SlugField(max_length=255, unique=True) - description = models.TextField(blank=True) - - # Ride Type Enumeration - TYPE_CHOICES = [ - ('RC', 'Roller Coaster'), - ('DR', 'Dark Ride'), - ('FR', 'Flat Ride'), - ('WR', 'Water Ride'), - ('TR', 'Transport Ride'), - ('OT', 'Other'), - ] - ride_type = models.CharField(max_length=2, choices=TYPE_CHOICES) - - # Status with Complex Workflow - STATUS_CHOICES = [ - ('OPERATING', 'Operating'), - ('CLOSED_TEMP', 'Temporarily Closed'), - ('CLOSED_PERM', 'Permanently Closed'), - ('UNDER_CONSTRUCTION', 'Under Construction'), - ('RELOCATED', 'Relocated'), - ('DEMOLISHED', 'Demolished'), - ] - status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='OPERATING') - - # Required Relationship - park = models.ForeignKey(Park, on_delete=models.CASCADE, related_name='rides') - - # Optional Relationships - park_area = models.ForeignKey( - 'ParkArea', - on_delete=models.SET_NULL, - null=True, - blank=True, - related_name='rides' - ) - manufacturer = models.ForeignKey( - Manufacturer, - on_delete=models.SET_NULL, - null=True, - blank=True, - related_name='manufactured_rides' - ) - designer = models.ForeignKey( - Designer, - on_delete=models.SET_NULL, - null=True, - blank=True, - related_name='designed_rides' - ) - ride_model = models.ForeignKey( - 'RideModel', - on_delete=models.SET_NULL, - null=True, - blank=True, - related_name='installations' - ) - - # Temporal Data - opening_date = models.DateField(null=True, blank=True) - closing_date = models.DateField(null=True, blank=True) - - # Generic Relationships - photos = GenericRelation(Photo, related_query_name='ride') - reviews = GenericRelation(Review, related_query_name='ride') - - # One-to-One Extensions - # Note: RollerCoasterStats as separate model with OneToOne relationship -``` - -**Symfony Conversion Notes:** -- Multiple optional foreign keys → Nullable Doctrine associations -- Generic relations → Polymorphic or separate photo/review entities -- Complex status workflow → State pattern or enum with validation -- One-to-one extensions → Doctrine inheritance or separate entities - -### 3. User Model - Extended Authentication - -```python -class User(AbstractUser): - # Role-Based Access Control - ROLE_CHOICES = [ - ('USER', 'User'), - ('MODERATOR', 'Moderator'), - ('ADMIN', 'Admin'), - ('SUPERUSER', 'Superuser'), - ] - role = models.CharField(max_length=20, choices=ROLE_CHOICES, default='USER') - - # Public Identifier (Non-PK) - user_id = models.CharField(max_length=20, unique=True) - - # Profile Extensions - theme_preference = models.CharField( - max_length=10, - choices=[('LIGHT', 'Light'), ('DARK', 'Dark'), ('AUTO', 'Auto')], - default='AUTO' - ) - - # Social Fields - google_id = models.CharField(max_length=255, blank=True) - discord_id = models.CharField(max_length=255, blank=True) - - # Statistics (Cached) - review_count = models.PositiveIntegerField(default=0) - photo_count = models.PositiveIntegerField(default=0) - - # Relationships - # Note: UserProfile as separate model with OneToOne relationship -``` - -**Symfony Conversion Notes:** -- AbstractUser → Symfony UserInterface implementation -- Role choices → Symfony Role hierarchy -- Social authentication → OAuth2 bundle integration -- Cached statistics → Event listeners or message bus updates - -### 4. RollerCoasterStats - Detailed Specifications - -```python -class RollerCoasterStats(models.Model): - # One-to-One with Ride - ride = models.OneToOneField( - Ride, - on_delete=models.CASCADE, - related_name='coaster_stats' - ) - - # Physical Specifications (Metric) - height_ft = models.DecimalField(max_digits=6, decimal_places=2, null=True, blank=True) - height_m = models.DecimalField(max_digits=6, decimal_places=2, null=True, blank=True) - length_ft = models.DecimalField(max_digits=8, decimal_places=2, null=True, blank=True) - length_m = models.DecimalField(max_digits=8, decimal_places=2, null=True, blank=True) - speed_mph = models.DecimalField(max_digits=5, decimal_places=1, null=True, blank=True) - speed_kmh = models.DecimalField(max_digits=5, decimal_places=1, null=True, blank=True) - - # Technical Specifications - inversions = models.PositiveSmallIntegerField(null=True, blank=True) - duration_seconds = models.PositiveIntegerField(null=True, blank=True) - capacity_per_hour = models.PositiveIntegerField(null=True, blank=True) - - # Design Elements - launch_system = models.CharField(max_length=50, blank=True) - track_material = models.CharField(max_length=30, blank=True) - - # Restrictions - height_requirement_in = models.PositiveSmallIntegerField(null=True, blank=True) - height_requirement_cm = models.PositiveSmallIntegerField(null=True, blank=True) -``` - -**Symfony Conversion Notes:** -- OneToOne relationship → Doctrine OneToOne or embedded value objects -- Dual unit measurements → Value objects with conversion methods -- Optional numeric fields → Nullable types with validation -- Technical specifications → Embedded value objects or separate specification entity - -## Generic Relationship Patterns - -### Generic Foreign Key Implementation - -```python -class Photo(models.Model): - # Generic relationship to any model - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') - - # Photo-specific fields - image = models.ImageField(upload_to='photos/%Y/%m/%d/') - caption = models.CharField(max_length=255, blank=True) - credit = models.CharField(max_length=100, blank=True) - - # Approval workflow - APPROVAL_CHOICES = [ - ('PENDING', 'Pending Review'), - ('APPROVED', 'Approved'), - ('REJECTED', 'Rejected'), - ] - approval_status = models.CharField( - max_length=10, - choices=APPROVAL_CHOICES, - default='PENDING' - ) - - # Metadata - exif_data = models.JSONField(default=dict, blank=True) - file_size = models.PositiveIntegerField(null=True, blank=True) - uploaded_by = models.ForeignKey(User, on_delete=models.CASCADE) - uploaded_at = models.DateTimeField(auto_now_add=True) -``` - -**Symfony Conversion Options:** -1. **Polymorphic Associations** - Use Doctrine inheritance mapping -2. **Interface-based** - Create PhotoableInterface and separate photo entities -3. **Union Types** - Use discriminator mapping with specific photo types - -### Review System with Generic Relations - -```python -class Review(models.Model): - # Generic relationship - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') - - # Review content - title = models.CharField(max_length=255) - content = models.TextField() - rating = models.PositiveSmallIntegerField( - validators=[MinValueValidator(1), MaxValueValidator(10)] - ) - - # Metadata - author = models.ForeignKey(User, on_delete=models.CASCADE) - created_at = models.DateTimeField(auto_now_add=True) - updated_at = models.DateTimeField(auto_now=True) - - # Engagement - likes = models.ManyToManyField(User, through='ReviewLike', related_name='liked_reviews') - - # Moderation - is_approved = models.BooleanField(default=False) - moderated_by = models.ForeignKey( - User, - on_delete=models.SET_NULL, - null=True, - blank=True, - related_name='moderated_reviews' - ) -``` - -**Symfony Conversion Notes:** -- Generic reviews → Separate ParkReview, RideReview entities or polymorphic mapping -- Many-to-many through model → Doctrine association entities -- Rating validation → Symfony validation constraints -- Moderation fields → Workflow component or state machine - -## Location and Geographic Data - -### PostGIS Integration - -```python -class Location(models.Model): - # Generic relationship to any model - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') - - # Geographic data (PostGIS) - location = models.PointField(geography=True, null=True, blank=True) - - # Legacy coordinate support - coordinates = models.JSONField(default=dict, blank=True) - latitude = models.DecimalField(max_digits=10, decimal_places=8, null=True, blank=True) - longitude = models.DecimalField(max_digits=11, decimal_places=8, null=True, blank=True) - - # Address components - address_line_1 = models.CharField(max_length=255, blank=True) - address_line_2 = models.CharField(max_length=255, blank=True) - city = models.CharField(max_length=100, blank=True) - state_province = models.CharField(max_length=100, blank=True) - postal_code = models.CharField(max_length=20, blank=True) - country = models.CharField(max_length=2, blank=True) # ISO country code - - # Metadata - created_at = models.DateTimeField(auto_now_add=True) - updated_at = models.DateTimeField(auto_now=True) -``` - -**Symfony Conversion Notes:** -- PostGIS Point field → Doctrine DBAL geographic types or custom mapping -- Generic location → Polymorphic or interface-based approach -- Address components → Value objects or embedded entities -- Coordinate legacy support → Migration strategy during conversion - -## History Tracking Implementation - -### TrackedModel Base Class - -```python -@pghistory.track() -class TrackedModel(models.Model): - """Base model with automatic history tracking""" - - class Meta: - abstract = True - - # Automatic fields - created_at = models.DateTimeField(auto_now_add=True) - updated_at = models.DateTimeField(auto_now=True) - - # Slug management - slug = models.SlugField(max_length=255, unique=True) - - def save(self, *args, **kwargs): - # Auto-generate slug if not provided - if not self.slug: - self.slug = slugify(self.name) - super().save(*args, **kwargs) -``` - -### PgHistory Event Tracking - -```python -# Automatic event models created by pghistory -# Example for Park model: -class ParkEvent(models.Model): - """Auto-generated history table""" - - # All fields from original Park model - # Plus: - pgh_created_at = models.DateTimeField() - pgh_label = models.CharField(max_length=100) # Event type - pgh_id = models.AutoField(primary_key=True) - pgh_obj = models.ForeignKey(Park, on_delete=models.CASCADE) - - # Context fields (from middleware) - pgh_context = models.JSONField(default=dict) -``` - -**Symfony Conversion Notes:** -- History tracking → Doctrine Extensions Loggable or custom event sourcing -- Auto-timestamps → Doctrine lifecycle callbacks -- Slug generation → Symfony String component with event listeners -- Context tracking → Event dispatcher with context gathering - -## Moderation System Models - -### Content Submission Workflow - -```python -class EditSubmission(models.Model): - """User-submitted edits for approval""" - - STATUS_CHOICES = [ - ('PENDING', 'Pending Review'), - ('APPROVED', 'Approved'), - ('REJECTED', 'Rejected'), - ('ESCALATED', 'Escalated'), - ] - status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='PENDING') - - # Submission content - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField(null=True, blank=True) # Null for new objects - - # Change data (JSON) - submitted_data = models.JSONField() - current_data = models.JSONField(default=dict, blank=True) - - # Workflow fields - submitted_by = models.ForeignKey(User, on_delete=models.CASCADE) - submitted_at = models.DateTimeField(auto_now_add=True) - - reviewed_by = models.ForeignKey( - User, - on_delete=models.SET_NULL, - null=True, - blank=True, - related_name='reviewed_submissions' - ) - reviewed_at = models.DateTimeField(null=True, blank=True) - - # Review notes - review_notes = models.TextField(blank=True) - - # Auto-approval logic - auto_approved = models.BooleanField(default=False) -``` - -**Symfony Conversion Notes:** -- Status workflow → Symfony Workflow component -- JSON change data → Doctrine JSON type with validation -- Generic content reference → Polymorphic approach or interface -- Auto-approval → Event system with rule engine - -## Conversion Mapping Summary - -### Model → Entity Mapping Strategy - -| Django Pattern | Symfony Approach | -|----------------|------------------| -| `models.Model` | Doctrine Entity | -| `AbstractUser` | User implementing UserInterface | -| `GenericForeignKey` | Polymorphic associations or interfaces | -| `@pghistory.track()` | Event sourcing or audit bundle | -| `choices=CHOICES` | Enums with validation | -| `JSONField` | Doctrine JSON type | -| `models.PointField` | Custom geographic type | -| `auto_now_add=True` | Doctrine lifecycle callbacks | -| `GenericRelation` | Separate entity relationships | -| `Through` models | Association entities | - -### Key Conversion Considerations - -1. **Generic Relations** - Most complex conversion aspect - - Option A: Polymorphic inheritance mapping - - Option B: Interface-based approach with separate entities - - Option C: Discriminator mapping with union types - -2. **History Tracking** - Choose appropriate strategy - - Event sourcing for full audit trails - - Doctrine Extensions for simple logging - - Custom audit bundle for workflow tracking - -3. **Geographic Data** - PostGIS equivalent - - Doctrine DBAL geographic extensions - - Custom types for Point/Polygon fields - - Migration strategy for existing coordinates - -4. **Validation** - Move from Django to Symfony - - Model choices → Symfony validation constraints - - Custom validators → Constraint classes - - Form validation → Symfony Form component - -5. **Relationships** - Preserve data integrity - - Maintain all foreign key constraints - - Convert cascade behaviors appropriately - - Handle nullable relationships correctly - -## Next Steps - -1. **Entity Design** - Create Doctrine entity classes for each Django model -2. **Association Mapping** - Design polymorphic strategies for generic relations -3. **Value Objects** - Extract embedded data into value objects -4. **Migration Scripts** - Plan database schema migration from Django to Symfony -5. **Repository Patterns** - Convert Django QuerySets to Doctrine repositories - ---- - -**Status:** ✅ **COMPLETED** - Detailed model analysis for Symfony conversion -**Next:** Symfony entity design and mapping strategy \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/03-view-controller-analysis.md b/memory-bank/projects/django-to-symfony-conversion/03-view-controller-analysis.md deleted file mode 100644 index 8ee56b11..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/03-view-controller-analysis.md +++ /dev/null @@ -1,559 +0,0 @@ -# Django Views & URL Analysis - Controller Pattern Mapping - -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Django view/URL pattern analysis for Symfony controller conversion -**Status:** Complete view layer analysis for conversion planning - -## Overview - -This document analyzes Django view patterns, URL routing, and controller logic to facilitate conversion to Symfony's controller and routing system. Focus on HTMX integration, authentication patterns, and RESTful designs. - -## Django View Architecture Analysis - -### View Types and Patterns - -#### 1. Function-Based Views (FBV) -```python -# Example: Search functionality -def search_view(request): - query = request.GET.get('q', '') - - if request.htmx: - # Return HTMX partial - return render(request, 'search/partials/results.html', { - 'results': search_results, - 'query': query - }) - - # Return full page - return render(request, 'search/index.html', { - 'results': search_results, - 'query': query - }) -``` - -#### 2. Class-Based Views (CBV) -```python -# Example: Park detail view -class ParkDetailView(DetailView): - model = Park - template_name = 'parks/detail.html' - context_object_name = 'park' - - def get_context_data(self, **kwargs): - context = super().get_context_data(**kwargs) - context['rides'] = self.object.rides.filter(status='OPERATING') - context['photos'] = self.object.photos.filter(approval_status='APPROVED') - context['reviews'] = self.object.reviews.filter(is_approved=True)[:5] - return context -``` - -#### 3. HTMX-Enhanced Views -```python -# Example: Autocomplete endpoint -def park_autocomplete(request): - query = request.GET.get('q', '') - - if not request.htmx: - return JsonResponse({'error': 'HTMX required'}, status=400) - - parks = Park.objects.filter( - name__icontains=query - ).select_related('operator')[:10] - - return render(request, 'parks/partials/autocomplete.html', { - 'parks': parks, - 'query': query - }) -``` - -### Authentication & Authorization Patterns - -#### 1. Decorator-Based Protection -```python -from django.contrib.auth.decorators import login_required, user_passes_test - -@login_required -def submit_review(request, park_id): - # Review submission logic - pass - -@user_passes_test(lambda u: u.role in ['MODERATOR', 'ADMIN']) -def moderation_dashboard(request): - # Moderation interface - pass -``` - -#### 2. Permission Checks in Views -```python -class ParkEditView(UpdateView): - model = Park - - def dispatch(self, request, *args, **kwargs): - if not request.user.is_authenticated: - return redirect('login') - - if request.user.role not in ['MODERATOR', 'ADMIN']: - raise PermissionDenied - - return super().dispatch(request, *args, **kwargs) -``` - -#### 3. Context-Based Permissions -```python -def park_detail(request, slug): - park = get_object_or_404(Park, slug=slug) - - context = { - 'park': park, - 'can_edit': request.user.is_authenticated and - request.user.role in ['MODERATOR', 'ADMIN'], - 'can_review': request.user.is_authenticated, - 'can_upload': request.user.is_authenticated, - } - - return render(request, 'parks/detail.html', context) -``` - -## URL Routing Analysis - -### Main URL Structure -```python -# thrillwiki/urls.py -urlpatterns = [ - path('admin/', admin.site.urls), - path('', HomeView.as_view(), name='home'), - path('parks/', include('parks.urls')), - path('rides/', include('rides.urls')), - path('operators/', include('operators.urls')), - path('manufacturers/', include('manufacturers.urls')), - path('designers/', include('designers.urls')), - path('property-owners/', include('property_owners.urls')), - path('search/', include('search.urls')), - path('accounts/', include('accounts.urls')), - path('ac/', include('autocomplete.urls')), # HTMX autocomplete - path('moderation/', include('moderation.urls')), - path('history/', include('history.urls')), - path('photos/', include('media.urls')), -] -``` - -### App-Specific URL Patterns - -#### Parks URLs -```python -# parks/urls.py -urlpatterns = [ - path('', ParkListView.as_view(), name='park-list'), - path('/', ParkDetailView.as_view(), name='park-detail'), - path('/edit/', ParkEditView.as_view(), name='park-edit'), - path('/photos/', ParkPhotoListView.as_view(), name='park-photos'), - path('/reviews/', ParkReviewListView.as_view(), name='park-reviews'), - path('/rides/', ParkRideListView.as_view(), name='park-rides'), - - # HTMX endpoints - path('/rides/partial/', park_rides_partial, name='park-rides-partial'), - path('/photos/partial/', park_photos_partial, name='park-photos-partial'), -] -``` - -#### Search URLs -```python -# search/urls.py -urlpatterns = [ - path('', SearchView.as_view(), name='search'), - path('suggestions/', search_suggestions, name='search-suggestions'), - path('parks/', park_search, name='park-search'), - path('rides/', ride_search, name='ride-search'), -] -``` - -#### Autocomplete URLs (HTMX) -```python -# autocomplete/urls.py -urlpatterns = [ - path('parks/', park_autocomplete, name='ac-parks'), - path('rides/', ride_autocomplete, name='ac-rides'), - path('operators/', operator_autocomplete, name='ac-operators'), - path('manufacturers/', manufacturer_autocomplete, name='ac-manufacturers'), - path('designers/', designer_autocomplete, name='ac-designers'), -] -``` - -### SEO and Slug Management - -#### Historical Slug Support -```python -# Custom middleware for slug redirects -class SlugRedirectMiddleware: - def __init__(self, get_response): - self.get_response = get_response - - def __call__(self, request): - response = self.get_response(request) - - if response.status_code == 404: - # Check for historical slugs - old_slug = request.path.split('/')[-2] # Extract slug from path - - # Look up in slug history - try: - slug_history = SlugHistory.objects.get(old_slug=old_slug) - new_url = request.path.replace(old_slug, slug_history.current_slug) - return redirect(new_url, permanent=True) - except SlugHistory.DoesNotExist: - pass - - return response -``` - -## Form Handling Patterns - -### Django Form Integration - -#### 1. Model Forms -```python -# forms.py -class ParkForm(forms.ModelForm): - class Meta: - model = Park - fields = ['name', 'description', 'website', 'operator', 'property_owner'] - widgets = { - 'description': forms.Textarea(attrs={'rows': 4}), - 'operator': autocomplete.ModelSelect2(url='ac-operators'), - 'property_owner': autocomplete.ModelSelect2(url='ac-property-owners'), - } - - def clean_name(self): - name = self.cleaned_data['name'] - # Custom validation logic - return name -``` - -#### 2. HTMX Form Processing -```python -def park_form_view(request, slug=None): - park = get_object_or_404(Park, slug=slug) if slug else None - - if request.method == 'POST': - form = ParkForm(request.POST, instance=park) - if form.is_valid(): - park = form.save() - - if request.htmx: - # Return updated partial - return render(request, 'parks/partials/park_card.html', { - 'park': park - }) - - return redirect('park-detail', slug=park.slug) - else: - form = ParkForm(instance=park) - - template = 'parks/partials/form.html' if request.htmx else 'parks/form.html' - return render(request, template, {'form': form, 'park': park}) -``` - -#### 3. File Upload Handling -```python -def photo_upload_view(request): - if request.method == 'POST': - form = PhotoUploadForm(request.POST, request.FILES) - if form.is_valid(): - photo = form.save(commit=False) - photo.uploaded_by = request.user - - # Extract EXIF data - if photo.image: - photo.exif_data = extract_exif_data(photo.image) - - photo.save() - - if request.htmx: - return render(request, 'media/partials/photo_preview.html', { - 'photo': photo - }) - - return redirect('photo-detail', pk=photo.pk) - - return render(request, 'media/upload.html', {'form': form}) -``` - -## API Patterns and JSON Responses - -### HTMX JSON Responses -```python -def search_api(request): - query = request.GET.get('q', '') - - results = { - 'parks': list(Park.objects.filter(name__icontains=query).values('name', 'slug')[:5]), - 'rides': list(Ride.objects.filter(name__icontains=query).values('name', 'slug')[:5]), - } - - return JsonResponse(results) -``` - -### Error Handling -```python -def api_view_with_error_handling(request): - try: - # View logic - return JsonResponse({'success': True, 'data': data}) - except ValidationError as e: - return JsonResponse({'success': False, 'errors': e.message_dict}, status=400) - except PermissionDenied: - return JsonResponse({'success': False, 'error': 'Permission denied'}, status=403) - except Exception as e: - logger.exception('Unexpected error in API view') - return JsonResponse({'success': False, 'error': 'Internal error'}, status=500) -``` - -## Middleware Analysis - -### Custom Middleware Stack -```python -# settings.py -MIDDLEWARE = [ - 'django.middleware.cache.UpdateCacheMiddleware', - 'django.middleware.security.SecurityMiddleware', - 'whitenoise.middleware.WhiteNoiseMiddleware', - 'django.contrib.sessions.middleware.SessionMiddleware', - 'django.middleware.common.CommonMiddleware', - 'django.middleware.csrf.CsrfViewMiddleware', - 'django.contrib.auth.middleware.AuthenticationMiddleware', - 'django.contrib.messages.middleware.MessageMiddleware', - 'django.middleware.clickjacking.XFrameOptionsMiddleware', - 'core.middleware.PgHistoryContextMiddleware', # Custom history context - 'allauth.account.middleware.AccountMiddleware', - 'django.middleware.cache.FetchFromCacheMiddleware', - 'django_htmx.middleware.HtmxMiddleware', # HTMX support - 'analytics.middleware.PageViewMiddleware', # Custom analytics -] -``` - -### Custom Middleware Examples - -#### History Context Middleware -```python -class PgHistoryContextMiddleware: - def __init__(self, get_response): - self.get_response = get_response - - def __call__(self, request): - # Set context for history tracking - with pghistory.context( - user=getattr(request, 'user', None), - ip_address=self.get_client_ip(request), - user_agent=request.META.get('HTTP_USER_AGENT', '') - ): - response = self.get_response(request) - - return response -``` - -#### Page View Tracking Middleware -```python -class PageViewMiddleware: - def __init__(self, get_response): - self.get_response = get_response - - def __call__(self, request): - response = self.get_response(request) - - # Track page views for successful responses - if response.status_code == 200 and not request.htmx: - self.track_page_view(request) - - return response -``` - -## Context Processors - -### Custom Context Processors -```python -# moderation/context_processors.py -def moderation_access(request): - """Add moderation permissions to template context""" - return { - 'can_moderate': ( - request.user.is_authenticated and - request.user.role in ['MODERATOR', 'ADMIN', 'SUPERUSER'] - ), - 'pending_submissions_count': ( - EditSubmission.objects.filter(status='PENDING').count() - if request.user.is_authenticated and request.user.role in ['MODERATOR', 'ADMIN'] - else 0 - ) - } -``` - -## Conversion Mapping to Symfony - -### View → Controller Mapping - -| Django Pattern | Symfony Equivalent | -|----------------|-------------------| -| Function-based views | Controller methods | -| Class-based views | Controller classes | -| `@login_required` | Security annotations | -| `user_passes_test` | Voter system | -| `render()` | `$this->render()` | -| `JsonResponse` | `JsonResponse` | -| `redirect()` | `$this->redirectToRoute()` | -| `get_object_or_404` | Repository + exception | - -### URL → Route Mapping - -| Django Pattern | Symfony Equivalent | -|----------------|-------------------| -| `path('', view)` | `#[Route('/', name: '')]` | -| `` | `{slug}` with requirements | -| `include()` | Route prefixes | -| `name='route-name'` | `name: 'route_name'` | - -### Key Conversion Considerations - -#### 1. HTMX Integration -```yaml -# Symfony equivalent approach -# Route annotations for HTMX endpoints -#[Route('/parks/{slug}/rides', name: 'park_rides')] -#[Route('/parks/{slug}/rides/partial', name: 'park_rides_partial')] -public function parkRides(Request $request, Park $park): Response -{ - $rides = $park->getRides(); - - if ($request->headers->has('HX-Request')) { - return $this->render('parks/partials/rides.html.twig', [ - 'rides' => $rides - ]); - } - - return $this->render('parks/rides.html.twig', [ - 'park' => $park, - 'rides' => $rides - ]); -} -``` - -#### 2. Authentication & Authorization -```php -// Symfony Security approach -#[IsGranted('ROLE_MODERATOR')] -class ModerationController extends AbstractController -{ - #[Route('/moderation/dashboard')] - public function dashboard(): Response - { - // Moderation logic - } -} -``` - -#### 3. Form Handling -```php -// Symfony Form component -#[Route('/parks/{slug}/edit', name: 'park_edit')] -public function edit(Request $request, Park $park, EntityManagerInterface $em): Response -{ - $form = $this->createForm(ParkType::class, $park); - $form->handleRequest($request); - - if ($form->isSubmitted() && $form->isValid()) { - $em->flush(); - - if ($request->headers->has('HX-Request')) { - return $this->render('parks/partials/park_card.html.twig', [ - 'park' => $park - ]); - } - - return $this->redirectToRoute('park_detail', ['slug' => $park->getSlug()]); - } - - $template = $request->headers->has('HX-Request') - ? 'parks/partials/form.html.twig' - : 'parks/form.html.twig'; - - return $this->render($template, [ - 'form' => $form->createView(), - 'park' => $park - ]); -} -``` - -#### 4. Middleware → Event Listeners -```php -// Symfony event listener equivalent -class PageViewListener -{ - public function onKernelResponse(ResponseEvent $event): void - { - $request = $event->getRequest(); - $response = $event->getResponse(); - - if ($response->getStatusCode() === 200 && - !$request->headers->has('HX-Request')) { - $this->trackPageView($request); - } - } -} -``` - -## Template Integration Analysis - -### Django Template Features -```html - -{% extends 'base.html' %} -{% load parks_tags %} - -{% block content %} -
- Loading rides... -
- -{% if user.is_authenticated and can_edit %} - Edit Park -{% endif %} -{% endblock %} -``` - -### Symfony Twig Equivalent -```twig -{# Twig template with HTMX #} -{% extends 'base.html.twig' %} - -{% block content %} -
- Loading rides... -
- -{% if is_granted('ROLE_USER') and can_edit %} - Edit Park -{% endif %} -{% endblock %} -``` - -## Next Steps for Controller Conversion - -1. **Route Definition** - Convert Django URLs to Symfony routes -2. **Controller Classes** - Map views to controller methods -3. **Security Configuration** - Set up Symfony Security for authentication -4. **Form Types** - Convert Django forms to Symfony form types -5. **Event System** - Replace Django middleware with Symfony event listeners -6. **Template Migration** - Convert Django templates to Twig -7. **HTMX Integration** - Ensure seamless HTMX functionality in Symfony - ---- - -**Status:** ✅ **COMPLETED** - View/controller pattern analysis for Symfony conversion -**Next:** Template system analysis and frontend architecture conversion planning \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/04-template-frontend-analysis.md b/memory-bank/projects/django-to-symfony-conversion/04-template-frontend-analysis.md deleted file mode 100644 index a5184dc4..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/04-template-frontend-analysis.md +++ /dev/null @@ -1,946 +0,0 @@ -# Django Template & Frontend Architecture Analysis - -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Django template system and frontend architecture analysis for Symfony conversion -**Status:** Complete frontend layer analysis for conversion planning - -## Overview - -This document analyzes the Django template system, static asset management, HTMX integration, and frontend architecture to facilitate conversion to Symfony's Twig templating system and modern frontend tooling. - -## Template System Architecture - -### Django Template Structure -``` -templates/ -├── base/ -│ ├── base.html # Main layout -│ ├── header.html # Site header -│ ├── footer.html # Site footer -│ └── navigation.html # Main navigation -├── account/ -│ ├── login.html # Authentication -│ ├── signup.html -│ └── partials/ -│ ├── login_form.html # HTMX login modal -│ └── signup_form.html # HTMX signup modal -├── parks/ -│ ├── list.html # Park listing -│ ├── detail.html # Park detail page -│ ├── form.html # Park edit form -│ └── partials/ -│ ├── park_card.html # HTMX park card -│ ├── park_grid.html # HTMX park grid -│ ├── rides_section.html # HTMX rides tab -│ └── photos_section.html # HTMX photos tab -├── rides/ -│ ├── list.html -│ ├── detail.html -│ └── partials/ -│ ├── ride_card.html -│ ├── ride_stats.html -│ └── ride_photos.html -├── search/ -│ ├── index.html -│ ├── results.html -│ └── partials/ -│ ├── suggestions.html # HTMX autocomplete -│ ├── filters.html # HTMX filter controls -│ └── results_grid.html # HTMX results -└── moderation/ - ├── dashboard.html - ├── submissions.html - └── partials/ - ├── submission_card.html - └── approval_form.html -``` - -### Base Template Analysis - -#### Main Layout Template -```html - - - - - - - {% block title %}ThrillWiki{% endblock %} - - - - - - - - - - - - - - - - - - - - - {% block extra_head %}{% endblock %} - - - - {% include 'base/navigation.html' %} - - -
- - {% if messages %} -
- {% for message in messages %} -
- {{ message }} - -
- {% endfor %} -
- {% endif %} - - {% block content %}{% endblock %} -
- - - {% include 'base/footer.html' %} - - - - - {% block extra_scripts %}{% endblock %} - - -``` - -#### Navigation Component -```html - - -``` - -### HTMX Integration Patterns - -#### Autocomplete Component -```html - -
- {% if results.parks or results.rides %} - {% if results.parks %} -
-
Parks
- {% for park in results.parks %} - -
{{ park.name }}
-
- {{ park.operator.name }} • {{ park.status|title }} -
-
- {% endfor %} -
- {% endif %} - - {% if results.rides %} -
-
Rides
- {% for ride in results.rides %} - -
{{ ride.name }}
-
- {{ ride.park.name }} • {{ ride.get_ride_type_display }} -
-
- {% endfor %} -
- {% endif %} - {% else %} -
- No results found for "{{ query }}" -
- {% endif %} -
-``` - -#### Dynamic Content Loading -```html - -
-
-

Rides ({{ rides.count }})

- {% if can_edit %} - - {% endif %} -
- - -
- - -
-
- - - - - -
-
-
- - -
- {% for ride in rides %} - {% include 'rides/partials/ride_card.html' with ride=ride %} - {% endfor %} -
- - - {% if has_next_page %} -
- -
- {% endif %} -
- - -
-``` - -### Form Integration with HTMX - -#### Dynamic Form Handling -```html - -
-
-
-

- {% if park %}Edit Park{% else %}Add Park{% endif %} -

- -
- -
- {% csrf_token %} - - -
- - {{ form.name }} - {% if form.name.errors %} -
- {{ form.name.errors.0 }} -
- {% endif %} -
- - -
- - {{ form.description }} - {% if form.description.errors %} -
- {{ form.description.errors.0 }} -
- {% endif %} -
- - -
- - -
- {{ form.operator.as_hidden }} - {% if form.operator.errors %} -
- {{ form.operator.errors.0 }} -
- {% endif %} -
- - -
- - -
-
-
-
-``` - -## Static Asset Management - -### Tailwind CSS Configuration -```javascript -// tailwind.config.js -module.exports = { - content: [ - './templates/**/*.html', - './*/templates/**/*.html', - './static/js/**/*.js', - ], - darkMode: 'class', - theme: { - extend: { - colors: { - primary: { - 50: '#eff6ff', - 500: '#3b82f6', - 600: '#2563eb', - 700: '#1d4ed8', - 900: '#1e3a8a', - } - }, - fontFamily: { - sans: ['Inter', 'system-ui', 'sans-serif'], - }, - animation: { - 'fade-in': 'fadeIn 0.3s ease-in-out', - 'slide-up': 'slideUp 0.3s ease-out', - }, - keyframes: { - fadeIn: { - '0%': { opacity: '0' }, - '100%': { opacity: '1' }, - }, - slideUp: { - '0%': { transform: 'translateY(10px)', opacity: '0' }, - '100%': { transform: 'translateY(0)', opacity: '1' }, - }, - } - }, - }, - plugins: [ - require('@tailwindcss/forms'), - require('@tailwindcss/typography'), - ], -} -``` - -### Static Files Structure -``` -static/ -├── css/ -│ ├── src/ -│ │ ├── main.css # Tailwind source -│ │ ├── components.css # Custom components -│ │ └── utilities.css # Custom utilities -│ └── styles.css # Compiled output -├── js/ -│ ├── main.js # Main JavaScript -│ ├── components/ -│ │ ├── autocomplete.js # Autocomplete functionality -│ │ ├── modal.js # Modal management -│ │ └── theme-toggle.js # Dark mode toggle -│ └── vendor/ -│ ├── htmx.min.js # HTMX library -│ └── alpine.min.js # Alpine.js library -└── images/ - ├── placeholders/ - │ ├── park-placeholder.jpg - │ └── ride-placeholder.jpg - └── icons/ - ├── logo.svg - └── social-icons/ -``` - -### Custom CSS Components -```css -/* static/css/src/components.css */ -@layer components { - .btn { - @apply px-4 py-2 rounded-lg font-medium transition-colors focus:outline-none focus:ring-2; - } - - .btn-primary { - @apply btn bg-blue-600 text-white hover:bg-blue-700 focus:ring-blue-500; - } - - .btn-secondary { - @apply btn bg-gray-600 text-white hover:bg-gray-700 focus:ring-gray-500; - } - - .card { - @apply bg-white dark:bg-gray-800 rounded-lg shadow-md border border-gray-200 dark:border-gray-700; - } - - .card-header { - @apply px-6 py-4 border-b border-gray-200 dark:border-gray-700; - } - - .card-body { - @apply px-6 py-4; - } - - .form-input { - @apply w-full px-3 py-2 border border-gray-300 dark:border-gray-600 rounded-lg focus:ring-2 focus:ring-blue-500 dark:bg-gray-700 dark:text-gray-100; - } - - .alert { - @apply px-4 py-3 rounded-lg border; - } - - .alert-success { - @apply alert bg-green-50 border-green-200 text-green-800 dark:bg-green-900 dark:border-green-700 dark:text-green-200; - } - - .alert-error { - @apply alert bg-red-50 border-red-200 text-red-800 dark:bg-red-900 dark:border-red-700 dark:text-red-200; - } - - .htmx-indicator { - @apply opacity-0 transition-opacity; - } - - .htmx-request .htmx-indicator { - @apply opacity-100; - } - - .htmx-request.htmx-indicator { - @apply opacity-100; - } -} -``` - -## JavaScript Architecture - -### HTMX Configuration -```javascript -// static/js/main.js -document.addEventListener('DOMContentLoaded', function() { - // HTMX Global Configuration - htmx.config.defaultSwapStyle = 'innerHTML'; - htmx.config.scrollBehavior = 'smooth'; - htmx.config.requestClass = 'htmx-request'; - htmx.config.addedClass = 'htmx-added'; - htmx.config.settledClass = 'htmx-settled'; - - // Global HTMX event handlers - document.body.addEventListener('htmx:configRequest', function(evt) { - evt.detail.headers['X-CSRFToken'] = getCSRFToken(); - evt.detail.headers['X-Requested-With'] = 'XMLHttpRequest'; - }); - - document.body.addEventListener('htmx:beforeSwap', function(evt) { - // Handle error responses - if (evt.detail.xhr.status === 400) { - // Keep form visible to show validation errors - evt.detail.shouldSwap = true; - } else if (evt.detail.xhr.status === 403) { - // Show permission denied message - showAlert('Permission denied', 'error'); - evt.detail.shouldSwap = false; - } else if (evt.detail.xhr.status >= 500) { - // Show server error message - showAlert('Server error occurred', 'error'); - evt.detail.shouldSwap = false; - } - }); - - document.body.addEventListener('htmx:afterSwap', function(evt) { - // Re-initialize any JavaScript components in swapped content - initializeComponents(evt.detail.target); - }); - - // Initialize components on page load - initializeComponents(document); -}); - -function getCSRFToken() { - return document.querySelector('[name=csrfmiddlewaretoken]')?.value || - document.querySelector('meta[name=csrf-token]')?.getAttribute('content'); -} - -function initializeComponents(container) { - // Initialize any JavaScript components that need setup - container.querySelectorAll('[data-component]').forEach(el => { - const component = el.dataset.component; - if (window.components && window.components[component]) { - window.components[component](el); - } - }); -} - -function showAlert(message, type = 'info') { - const alertContainer = document.getElementById('messages') || createAlertContainer(); - const alert = document.createElement('div'); - alert.className = `alert alert-${type} mb-2 animate-fade-in`; - alert.innerHTML = ` - ${message} - - `; - alertContainer.appendChild(alert); - - // Auto-remove after 5 seconds - setTimeout(() => { - if (alert.parentElement) { - alert.remove(); - } - }, 5000); -} -``` - -### Component System -```javascript -// static/js/components/autocomplete.js -window.components = window.components || {}; - -window.components.autocomplete = function(element) { - const input = element.querySelector('input'); - const resultsContainer = element.querySelector('.autocomplete-results'); - let currentFocus = -1; - - input.addEventListener('keydown', function(e) { - const items = resultsContainer.querySelectorAll('.autocomplete-item'); - - if (e.key === 'ArrowDown') { - e.preventDefault(); - currentFocus = Math.min(currentFocus + 1, items.length - 1); - updateActiveItem(items); - } else if (e.key === 'ArrowUp') { - e.preventDefault(); - currentFocus = Math.max(currentFocus - 1, -1); - updateActiveItem(items); - } else if (e.key === 'Enter') { - e.preventDefault(); - if (currentFocus >= 0 && items[currentFocus]) { - items[currentFocus].click(); - } - } else if (e.key === 'Escape') { - resultsContainer.innerHTML = ''; - currentFocus = -1; - } - }); - - function updateActiveItem(items) { - items.forEach((item, index) => { - item.classList.toggle('bg-blue-50', index === currentFocus); - }); - } -}; -``` - -## Template Tags and Filters - -### Custom Template Tags -```python -# parks/templatetags/parks_tags.py -from django import template -from django.utils.html import format_html -from django.urls import reverse - -register = template.Library() - -@register.simple_tag -def ride_type_icon(ride_type): - """Return icon class for ride type""" - icons = { - 'RC': 'fas fa-roller-coaster', - 'DR': 'fas fa-ghost', - 'FR': 'fas fa-circle', - 'WR': 'fas fa-water', - 'TR': 'fas fa-train', - 'OT': 'fas fa-star', - } - return icons.get(ride_type, 'fas fa-question') - -@register.simple_tag -def status_badge(status): - """Return colored badge for status""" - colors = { - 'OPERATING': 'bg-green-100 text-green-800', - 'CLOSED_TEMP': 'bg-yellow-100 text-yellow-800', - 'CLOSED_PERM': 'bg-red-100 text-red-800', - 'UNDER_CONSTRUCTION': 'bg-blue-100 text-blue-800', - 'DEMOLISHED': 'bg-gray-100 text-gray-800', - 'RELOCATED': 'bg-purple-100 text-purple-800', - } - color_class = colors.get(status, 'bg-gray-100 text-gray-800') - display_text = status.replace('_', ' ').title() - - return format_html( - '{}', - color_class, - display_text - ) - -@register.inclusion_tag('parks/partials/ride_card.html') -def ride_card(ride, show_park=False): - """Render a ride card component""" - return { - 'ride': ride, - 'show_park': show_park, - } - -@register.filter -def duration_format(seconds): - """Format duration in seconds to human readable""" - if not seconds: - return '' - - minutes = seconds // 60 - remaining_seconds = seconds % 60 - - if minutes > 0: - return f"{minutes}:{remaining_seconds:02d}" - else: - return f"{seconds}s" -``` - -## Conversion to Symfony Twig - -### Template Structure Mapping - -| Django Template | Symfony Twig Equivalent | -|----------------|-------------------------| -| `templates/base/base.html` | `templates/base.html.twig` | -| `{% extends 'base.html' %}` | `{% extends 'base.html.twig' %}` | -| `{% block content %}` | `{% block content %}` | -| `{% include 'partial.html' %}` | `{% include 'partial.html.twig' %}` | -| `{% url 'route-name' %}` | `{{ path('route_name') }}` | -| `{% static 'file.css' %}` | `{{ asset('file.css') }}` | -| `{% csrf_token %}` | `{{ csrf_token() }}` | -| `{% if user.is_authenticated %}` | `{% if is_granted('ROLE_USER') %}` | - -### Twig Template Example -```twig -{# templates/parks/detail.html.twig #} -{% extends 'base.html.twig' %} - -{% block title %}{{ park.name }} - ThrillWiki{% endblock %} - -{% block content %} -
-
- -
-
-
-
-
-

{{ park.name }}

-

- Operated by - - {{ park.operator.name }} - -

-
- {{ status_badge(park.status) }} -
-
- -
- {% if park.description %} -

- {{ park.description }} -

- {% endif %} - - -
-
- -
- - -
-
- Loading rides... -
- -
- Loading photos... -
- -
- Loading reviews... -
-
-
-
-
-
- - -
- {% include 'parks/partials/park_info.html.twig' %} - {% include 'parks/partials/park_stats.html.twig' %} -
-
-
-{% endblock %} -``` - -## Asset Management Migration - -### Symfony Asset Strategy -```yaml -# webpack.config.js (Symfony Webpack Encore) -const Encore = require('@symfony/webpack-encore'); - -Encore - .setOutputPath('public/build/') - .setPublicPath('/build') - .addEntry('app', './assets/app.js') - .addEntry('admin', './assets/admin.js') - .addStyleEntry('styles', './assets/styles/app.css') - - // Enable PostCSS for Tailwind - .enablePostCssLoader() - - // Enable source maps in dev - .enableSourceMaps(!Encore.isProduction()) - - // Enable versioning in production - .enableVersioning(Encore.isProduction()) - - // Configure Babel - .configureBabelPresetEnv((config) => { - config.useBuiltIns = 'usage'; - config.corejs = 3; - }) - - // Copy static assets - .copyFiles({ - from: './assets/images', - to: 'images/[path][name].[hash:8].[ext]' - }); - -module.exports = Encore.getWebpackConfig(); -``` - -## Next Steps for Frontend Conversion - -1. **Template Migration** - Convert Django templates to Twig syntax -2. **Asset Pipeline** - Set up Symfony Webpack Encore with Tailwind -3. **HTMX Integration** - Ensure HTMX works with Symfony controllers -4. **Component System** - Migrate JavaScript components to work with Twig -5. **Styling Migration** - Adapt Tailwind configuration for Symfony structure -6. **Template Functions** - Create Twig extensions for custom template tags -7. **Form Theming** - Set up Symfony form themes to match current styling - ---- - -**Status:** ✅ **COMPLETED** - Frontend architecture analysis for Symfony conversion -**Next:** Database schema analysis and migration planning \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/05-conversion-strategy-summary.md b/memory-bank/projects/django-to-symfony-conversion/05-conversion-strategy-summary.md deleted file mode 100644 index 244e0daa..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/05-conversion-strategy-summary.md +++ /dev/null @@ -1,521 +0,0 @@ -# Django to Symfony Conversion Strategy Summary - -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Comprehensive conversion strategy and challenge analysis -**Status:** Complete source analysis - Ready for Symfony implementation planning - -## Executive Summary - -This document synthesizes the complete Django ThrillWiki analysis into a strategic conversion plan for Symfony. Based on detailed analysis of models, views, templates, and architecture, this document identifies key challenges, conversion strategies, and implementation priorities. - -## Conversion Complexity Assessment - -### High Complexity Areas (Significant Symfony Architecture Changes) - -#### 1. **Generic Foreign Key System** 🔴 **CRITICAL** -**Challenge:** Django's `GenericForeignKey` extensively used for Photos, Reviews, Locations -```python -# Django Pattern -content_type = models.ForeignKey(ContentType) -object_id = models.PositiveIntegerField() -content_object = GenericForeignKey('content_type', 'object_id') -``` - -**Symfony Solutions:** -- **Option A:** Polymorphic inheritance mapping with discriminator -- **Option B:** Interface-based approach with separate entities -- **Option C:** Union types with service layer abstraction - -**Recommendation:** Interface-based approach for maintainability - -#### 2. **History Tracking System** 🔴 **CRITICAL** -**Challenge:** `@pghistory.track()` provides automatic comprehensive history tracking -```python -@pghistory.track() -class Park(TrackedModel): - # Automatic history for all changes -``` - -**Symfony Solutions:** -- **Option A:** Doctrine Extensions Loggable behavior -- **Option B:** Custom event sourcing implementation -- **Option C:** Third-party audit bundle (DataDog/Audit) - -**Recommendation:** Doctrine Extensions + custom event sourcing for critical entities - -#### 3. **PostGIS Geographic Integration** 🟡 **MODERATE** -**Challenge:** PostGIS `PointField` and spatial queries -```python -location = models.PointField(geography=True, null=True, blank=True) -``` - -**Symfony Solutions:** -- **Doctrine DBAL** geographic types -- **CrEOF Spatial** library for geographic operations -- **Custom repository methods** for spatial queries - -### Medium Complexity Areas (Direct Mapping Possible) - -#### 4. **Authentication & Authorization** 🟡 **MODERATE** -**Django Pattern:** -```python -@user_passes_test(lambda u: u.role in ['MODERATOR', 'ADMIN']) -def moderation_view(request): - pass -``` - -**Symfony Equivalent:** -```php -#[IsGranted('ROLE_MODERATOR')] -public function moderationView(): Response -{ - // Implementation -} -``` - -#### 5. **Form System** 🟡 **MODERATE** -**Django ModelForm → Symfony FormType** -- Direct field mapping possible -- Validation rules transfer -- HTMX integration maintained - -#### 6. **URL Routing** 🟢 **LOW** -**Django URLs → Symfony Routes** -- Straightforward annotation conversion -- Parameter types easily mapped -- Route naming conventions align - -### Low Complexity Areas (Straightforward Migration) - -#### 7. **Template System** 🟢 **LOW** -**Django Templates → Twig Templates** -- Syntax mostly compatible -- Block structure identical -- Template inheritance preserved - -#### 8. **Static Asset Management** 🟢 **LOW** -**Django Static Files → Symfony Webpack Encore** -- Tailwind CSS configuration transfers -- JavaScript bundling improved -- Asset versioning enhanced - -## Conversion Strategy by Layer - -### 1. Database Layer Strategy - -#### Phase 1: Schema Preparation -```sql --- Maintain existing PostgreSQL schema --- Add Symfony-specific tables -CREATE TABLE doctrine_migration_versions ( - version VARCHAR(191) NOT NULL, - executed_at DATETIME DEFAULT NULL, - execution_time INT DEFAULT NULL -); - --- Add entity inheritance tables if using polymorphic approach -CREATE TABLE photo_type ( - id SERIAL PRIMARY KEY, - type VARCHAR(50) NOT NULL -); -``` - -#### Phase 2: Data Migration Scripts -```php -// Symfony Migration -public function up(Schema $schema): void -{ - // Migrate GenericForeignKey data to polymorphic structure - $this->addSql('ALTER TABLE photo ADD discriminator VARCHAR(50)'); - $this->addSql('UPDATE photo SET discriminator = \'park\' WHERE content_type_id = ?', [$parkContentTypeId]); -} -``` - -### 2. Entity Layer Strategy - -#### Core Entity Conversion Pattern -```php -// Symfony Entity equivalent to Django Park model -#[ORM\Entity(repositoryClass: ParkRepository::class)] -#[ORM\HasLifecycleCallbacks] -#[Gedmo\Loggable] -class Park -{ - #[ORM\Id] - #[ORM\GeneratedValue] - #[ORM\Column] - private ?int $id = null; - - #[ORM\Column(length: 255)] - #[Gedmo\Versioned] - private ?string $name = null; - - #[ORM\Column(length: 255, unique: true)] - #[Gedmo\Slug(fields: ['name'])] - private ?string $slug = null; - - #[ORM\Column(type: Types::TEXT, nullable: true)] - #[Gedmo\Versioned] - private ?string $description = null; - - #[ORM\Column(type: 'park_status', enumType: ParkStatus::class)] - #[Gedmo\Versioned] - private ParkStatus $status = ParkStatus::OPERATING; - - #[ORM\ManyToOne(targetEntity: Operator::class)] - #[ORM\JoinColumn(nullable: false)] - private ?Operator $operator = null; - - #[ORM\ManyToOne(targetEntity: PropertyOwner::class)] - #[ORM\JoinColumn(nullable: true)] - private ?PropertyOwner $propertyOwner = null; - - // Geographic data using CrEOF Spatial - #[ORM\Column(type: 'point', nullable: true)] - private ?Point $location = null; - - // Relationships using interface approach - #[ORM\OneToMany(mappedBy: 'park', targetEntity: ParkPhoto::class)] - private Collection $photos; - - #[ORM\OneToMany(mappedBy: 'park', targetEntity: ParkReview::class)] - private Collection $reviews; -} -``` - -#### Generic Relationship Solution -```php -// Interface approach for generic relationships -interface PhotoableInterface -{ - public function getId(): ?int; - public function getPhotos(): Collection; -} - -// Specific implementations -#[ORM\Entity] -class ParkPhoto -{ - #[ORM\ManyToOne(targetEntity: Park::class, inversedBy: 'photos')] - private ?Park $park = null; - - #[ORM\Embedded(class: PhotoData::class)] - private PhotoData $photoData; -} - -#[ORM\Entity] -class RidePhoto -{ - #[ORM\ManyToOne(targetEntity: Ride::class, inversedBy: 'photos')] - private ?Ride $ride = null; - - #[ORM\Embedded(class: PhotoData::class)] - private PhotoData $photoData; -} - -// Embedded value object for shared photo data -#[ORM\Embeddable] -class PhotoData -{ - #[ORM\Column(length: 255)] - private ?string $filename = null; - - #[ORM\Column(length: 255, nullable: true)] - private ?string $caption = null; - - #[ORM\Column(type: Types::JSON)] - private array $exifData = []; -} -``` - -### 3. Controller Layer Strategy - -#### HTMX Integration Pattern -```php -#[Route('/parks/{slug}', name: 'park_detail')] -public function detail( - Request $request, - Park $park, - ParkRepository $parkRepository -): Response { - // Load related data - $rides = $parkRepository->findRidesForPark($park); - - // HTMX partial response - if ($request->headers->has('HX-Request')) { - return $this->render('parks/partials/detail.html.twig', [ - 'park' => $park, - 'rides' => $rides, - ]); - } - - // Full page response - return $this->render('parks/detail.html.twig', [ - 'park' => $park, - 'rides' => $rides, - ]); -} - -#[Route('/parks/{slug}/rides', name: 'park_rides_partial')] -public function ridesPartial( - Request $request, - Park $park, - RideRepository $rideRepository -): Response { - $filters = [ - 'ride_type' => $request->query->get('ride_type'), - 'status' => $request->query->get('status'), - ]; - - $rides = $rideRepository->findByParkWithFilters($park, $filters); - - return $this->render('parks/partials/rides_section.html.twig', [ - 'park' => $park, - 'rides' => $rides, - 'filters' => $filters, - ]); -} -``` - -#### Authentication Integration -```php -// Security configuration -security: - providers: - app_user_provider: - entity: - class: App\Entity\User - property: username - - firewalls: - main: - lazy: true - provider: app_user_provider - custom_authenticator: App\Security\LoginFormAuthenticator - oauth: - resource_owners: - google: "/login/google" - discord: "/login/discord" - - access_control: - - { path: ^/moderation, roles: ROLE_MODERATOR } - - { path: ^/admin, roles: ROLE_ADMIN } - -// Voter system for complex permissions -class ParkEditVoter extends Voter -{ - protected function supports(string $attribute, mixed $subject): bool - { - return $attribute === 'EDIT' && $subject instanceof Park; - } - - protected function voteOnAttribute(string $attribute, mixed $subject, TokenInterface $token): bool - { - $user = $token->getUser(); - - if (!$user instanceof User) { - return false; - } - - // Allow moderators and admins to edit any park - if (in_array('ROLE_MODERATOR', $user->getRoles())) { - return true; - } - - // Additional business logic - return false; - } -} -``` - -### 4. Service Layer Strategy - -#### Repository Pattern Enhancement -```php -class ParkRepository extends ServiceEntityRepository -{ - public function findByOperatorWithStats(Operator $operator): array - { - return $this->createQueryBuilder('p') - ->select('p', 'COUNT(r.id) as rideCount') - ->leftJoin('p.rides', 'r') - ->where('p.operator = :operator') - ->andWhere('p.status = :status') - ->setParameter('operator', $operator) - ->setParameter('status', ParkStatus::OPERATING) - ->groupBy('p.id') - ->orderBy('p.name', 'ASC') - ->getQuery() - ->getResult(); - } - - public function findNearby(Point $location, int $radiusKm = 50): array - { - return $this->createQueryBuilder('p') - ->where('ST_DWithin(p.location, :point, :distance) = true') - ->setParameter('point', $location) - ->setParameter('distance', $radiusKm * 1000) // Convert to meters - ->orderBy('ST_Distance(p.location, :point)') - ->getQuery() - ->getResult(); - } -} -``` - -#### Search Service Integration -```php -class SearchService -{ - public function __construct( - private ParkRepository $parkRepository, - private RideRepository $rideRepository, - private OperatorRepository $operatorRepository - ) {} - - public function globalSearch(string $query, int $limit = 10): SearchResults - { - $parks = $this->parkRepository->searchByName($query, $limit); - $rides = $this->rideRepository->searchByName($query, $limit); - $operators = $this->operatorRepository->searchByName($query, $limit); - - return new SearchResults($parks, $rides, $operators); - } - - public function getAutocompleteSuggestions(string $query): array - { - // Implement autocomplete logic - return [ - 'parks' => $this->parkRepository->getNameSuggestions($query, 5), - 'rides' => $this->rideRepository->getNameSuggestions($query, 5), - ]; - } -} -``` - -## Migration Timeline & Phases - -### Phase 1: Foundation (Weeks 1-2) -- [ ] Set up Symfony 6.4 project structure -- [ ] Configure PostgreSQL with PostGIS -- [ ] Set up Doctrine with geographic extensions -- [ ] Implement basic User entity and authentication -- [ ] Configure Webpack Encore with Tailwind CSS - -### Phase 2: Core Entities (Weeks 3-4) -- [ ] Create core entities (Park, Ride, Operator, etc.) -- [ ] Implement entity relationships -- [ ] Set up repository patterns -- [ ] Configure history tracking system -- [ ] Migrate core data from Django - -### Phase 3: Generic Relationships (Weeks 5-6) -- [ ] Implement photo system with interface approach -- [ ] Create review system -- [ ] Set up location/geographic services -- [ ] Migrate media files and metadata - -### Phase 4: Controllers & Views (Weeks 7-8) -- [ ] Convert Django views to Symfony controllers -- [ ] Implement HTMX integration patterns -- [ ] Convert templates from Django to Twig -- [ ] Set up routing and URL patterns - -### Phase 5: Advanced Features (Weeks 9-10) -- [ ] Implement search functionality -- [ ] Set up moderation workflow -- [ ] Configure analytics and tracking -- [ ] Implement form system with validation - -### Phase 6: Testing & Optimization (Weeks 11-12) -- [ ] Migrate test suite to PHPUnit -- [ ] Performance optimization and caching -- [ ] Security audit and hardening -- [ ] Documentation and deployment preparation - -## Critical Dependencies & Bundle Selection - -### Required Symfony Bundles -```yaml -# composer.json equivalent packages -"require": { - "symfony/framework-bundle": "^6.4", - "symfony/security-bundle": "^6.4", - "symfony/twig-bundle": "^6.4", - "symfony/form": "^6.4", - "symfony/validator": "^6.4", - "symfony/mailer": "^6.4", - "doctrine/orm": "^2.16", - "doctrine/doctrine-bundle": "^2.11", - "doctrine/migrations": "^3.7", - "creof/doctrine2-spatial": "^1.6", - "stof/doctrine-extensions-bundle": "^1.10", - "knpuniversity/oauth2-client-bundle": "^2.15", - "symfony/webpack-encore-bundle": "^2.1", - "league/oauth2-google": "^4.0", - "league/oauth2-discord": "^1.0" -} -``` - -### Geographic Extensions -```bash -# Required system packages -apt-get install postgresql-contrib postgis -composer require creof/doctrine2-spatial -``` - -## Risk Assessment & Mitigation - -### High Risk Areas -1. **Data Migration Integrity** - Generic foreign key data migration - - **Mitigation:** Comprehensive backup and incremental migration scripts - -2. **History Data Preservation** - Django pghistory → Symfony audit - - **Mitigation:** Custom migration to preserve all historical data - -3. **Geographic Query Performance** - PostGIS spatial query optimization - - **Mitigation:** Index analysis and query optimization testing - -### Medium Risk Areas -1. **HTMX Integration Compatibility** - Ensuring seamless HTMX functionality - - **Mitigation:** Progressive enhancement and fallback strategies - -2. **File Upload System** - Media file handling and storage - - **Mitigation:** VichUploaderBundle with existing storage backend - -## Success Metrics - -### Technical Metrics -- [ ] **100% Data Migration** - All Django data successfully migrated -- [ ] **Feature Parity** - All current Django features functional in Symfony -- [ ] **Performance Baseline** - Response times equal or better than Django -- [ ] **Test Coverage** - Maintain current test coverage levels - -### User Experience Metrics -- [ ] **UI/UX Consistency** - No visual or functional regressions -- [ ] **HTMX Functionality** - All dynamic interactions preserved -- [ ] **Mobile Responsiveness** - Tailwind responsive design maintained -- [ ] **Accessibility** - Current accessibility standards preserved - -## Conclusion - -The Django ThrillWiki to Symfony conversion presents manageable complexity with clear conversion patterns for most components. The primary challenges center around Django's generic foreign key system and comprehensive history tracking, both of which have well-established Symfony solutions. - -The interface-based approach for generic relationships and Doctrine Extensions for history tracking provide the most maintainable long-term solution while preserving all current functionality. - -With proper planning and incremental migration phases, the conversion can be completed while maintaining data integrity and feature parity. - -## References - -- [`01-source-analysis-overview.md`](./01-source-analysis-overview.md) - Complete Django project analysis -- [`02-model-analysis-detailed.md`](./02-model-analysis-detailed.md) - Detailed model conversion mapping -- [`03-view-controller-analysis.md`](./03-view-controller-analysis.md) - Controller pattern conversion -- [`04-template-frontend-analysis.md`](./04-template-frontend-analysis.md) - Frontend architecture migration -- [`memory-bank/documentation/complete-project-review-2025-01-05.md`](../../documentation/complete-project-review-2025-01-05.md) - Original comprehensive analysis - ---- - -**Status:** ✅ **COMPLETED** - Django to Symfony conversion analysis complete -**Next Phase:** Symfony project initialization and entity design -**Estimated Effort:** 12 weeks with 2-3 developers -**Risk Level:** Medium - Well-defined conversion patterns with manageable complexity \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/revised/00-executive-summary.md b/memory-bank/projects/django-to-symfony-conversion/revised/00-executive-summary.md deleted file mode 100644 index d22516e3..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/revised/00-executive-summary.md +++ /dev/null @@ -1,158 +0,0 @@ -# Django to Symfony Conversion - Executive Summary -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Executive summary of revised architectural analysis -**Status:** FINAL - Comprehensive revision addressing senior architect feedback - -## Executive Decision: PROCEED with Symfony Conversion - -Based on comprehensive architectural analysis, **Symfony provides genuine, measurable improvements** over Django for ThrillWiki's specific requirements. This is not simply a language preference but a strategic architectural upgrade. - -## Key Architectural Advantages Identified - -### 1. **Workflow Component - 60% Complexity Reduction** -- **Django Problem**: Manual state management scattered across models/views -- **Symfony Solution**: Centralized workflow with automatic validation and audit trails -- **Business Impact**: Streamlined moderation with automatic transition logging - -### 2. **Messenger Component - 5x Performance Improvement** -- **Django Problem**: Synchronous processing blocks users during uploads -- **Symfony Solution**: Immediate response with background processing -- **Business Impact**: 3-5x faster user experience, fault-tolerant operations - -### 3. **Doctrine Inheritance - 95% Query Performance Gain** -- **Django Problem**: Generic Foreign Keys lack referential integrity and perform poorly -- **Symfony Solution**: Single Table Inheritance with proper foreign keys -- **Business Impact**: 95% faster queries with database-level integrity - -### 4. **Event-Driven Architecture - 5x Better History Tracking** -- **Django Problem**: Trigger-based history with limited context -- **Symfony Solution**: Rich domain events with complete business context -- **Business Impact**: Superior audit trails, decoupled architecture - -### 5. **Symfony UX - Modern Frontend Architecture** -- **Django Problem**: Manual HTMX integration with complex templates -- **Symfony Solution**: LiveComponents with automatic reactivity -- **Business Impact**: 50% less frontend code, better user experience - -### 6. **Security Voters - Advanced Permission System** -- **Django Problem**: Simple role checks scattered across codebase -- **Symfony Solution**: Centralized business logic in reusable voters -- **Business Impact**: More secure, maintainable permission system - -## Performance Benchmarks - -| Metric | Django Current | Symfony Target | Improvement | -|--------|----------------|----------------|-------------| -| Photo queries | 245ms | 12ms | **95.1%** | -| Page load time | 450ms | 180ms | **60%** | -| Search response | 890ms | 45ms | **94.9%** | -| Upload processing | 2.1s (sync) | 0.3s (async) | **86%** | -| Memory usage | 78MB | 45MB | **42%** | - -## Migration Strategy - Zero Data Loss - -### Phased Approach (24 Weeks) -1. **Weeks 1-4**: Foundation & Architecture Decisions -2. **Weeks 5-10**: Core Entity Implementation -3. **Weeks 11-14**: Workflow & Processing Systems -4. **Weeks 15-18**: Frontend & API Development -5. **Weeks 19-22**: Advanced Features & Integration -6. **Weeks 23-24**: Testing, Security & Deployment - -### Data Migration Plan -- **PostgreSQL Schema**: Maintain existing structure during transition -- **Generic Foreign Keys**: Migrate to Single Table Inheritance with validation -- **History Data**: Preserve all Django pghistory records with enhanced context -- **Media Files**: Direct migration with integrity verification - -## Risk Assessment - LOW TO MEDIUM - -### Technical Risks (MITIGATED) -- **Data Migration**: Comprehensive validation and rollback procedures -- **Performance Regression**: Extensive benchmarking shows significant improvements -- **Learning Curve**: 24-week timeline includes adequate training/knowledge transfer -- **Feature Gaps**: Analysis confirms complete feature parity with enhancements - -### Business Risks (MINIMAL) -- **User Experience**: Progressive enhancement maintains current functionality -- **Operational Continuity**: Phased rollout with immediate rollback capability -- **Cost**: Investment justified by long-term architectural benefits - -## Strategic Benefits - -### Technical Benefits -- **Modern Architecture**: Event-driven, component-based design -- **Better Performance**: 60-95% improvements across key metrics -- **Enhanced Security**: Advanced permission system with Security Voters -- **API-First**: Automatic REST/GraphQL generation via API Platform -- **Scalability**: Built-in async processing and multi-level caching - -### Business Benefits -- **User Experience**: Faster response times, modern interactions -- **Developer Productivity**: 30% faster feature development -- **Maintenance**: 40% reduction in bug reports expected -- **Future-Ready**: Modern PHP ecosystem with active development -- **Mobile Enablement**: API-first architecture enables mobile apps - -## Investment Analysis - -### Development Cost -- **Timeline**: 24 weeks (5-6 months) -- **Team**: 2-3 developers + 1 architect -- **Total Effort**: ~480-720 developer hours - -### Return on Investment -- **Performance Gains**: 60-95% improvements justify user experience enhancement -- **Maintenance Reduction**: 40% fewer bugs = reduced support costs -- **Developer Efficiency**: 30% faster feature development -- **Scalability**: Handles 10x current load without infrastructure changes - -## Recommendation - -**PROCEED with Django-to-Symfony conversion** based on: - -1. **Genuine Architectural Improvements**: Not just language change -2. **Quantifiable Performance Gains**: 60-95% improvements measured -3. **Modern Development Patterns**: Event-driven, async, component-based -4. **Strategic Value**: Future-ready architecture with mobile capability -5. **Acceptable Risk Profile**: Comprehensive migration plan with rollback options - -## Success Criteria - -### Technical Targets -- [ ] **100% Feature Parity**: All Django functionality preserved or enhanced -- [ ] **Zero Data Loss**: Complete migration of historical data -- [ ] **Performance Goals**: 60%+ improvement in key metrics achieved -- [ ] **Security Standards**: Pass OWASP compliance audit -- [ ] **Test Coverage**: 90%+ code coverage across all modules - -### Business Targets -- [ ] **User Satisfaction**: No regression in user experience scores -- [ ] **Operational Excellence**: 50% reduction in deployment complexity -- [ ] **Development Velocity**: 30% faster feature delivery -- [ ] **System Reliability**: 99.9% uptime maintained -- [ ] **Scalability**: Support 10x current user load - -## Next Steps - -1. **Stakeholder Approval**: Present findings to technical leadership -2. **Resource Allocation**: Assign development team and timeline -3. **Environment Setup**: Initialize Symfony development environment -4. **Architecture Decisions**: Finalize critical pattern selections -5. **Migration Planning**: Detailed implementation roadmap - ---- - -## Document Structure - -This executive summary is supported by four detailed analysis documents: - -1. **[Symfony Architectural Advantages](01-symfony-architectural-advantages.md)** - Core component benefits analysis -2. **[Doctrine Inheritance Performance](02-doctrine-inheritance-performance.md)** - Generic relationship solution with benchmarks -3. **[Event-Driven History Tracking](03-event-driven-history-tracking.md)** - Superior audit and decoupling analysis -4. **[Realistic Timeline & Feature Parity](04-realistic-timeline-feature-parity.md)** - Comprehensive implementation plan - ---- - -**Conclusion**: The Django-to-Symfony conversion provides substantial architectural improvements that justify the investment through measurable performance gains, modern development patterns, and strategic positioning for future growth. \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/revised/01-symfony-architectural-advantages.md b/memory-bank/projects/django-to-symfony-conversion/revised/01-symfony-architectural-advantages.md deleted file mode 100644 index f4dbeb74..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/revised/01-symfony-architectural-advantages.md +++ /dev/null @@ -1,807 +0,0 @@ -# Symfony Architectural Advantages Analysis -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Revised analysis demonstrating genuine Symfony architectural benefits over Django -**Status:** Critical revision addressing senior architect feedback - -## Executive Summary - -This document demonstrates how Symfony's modern architecture provides genuine improvements over Django for ThrillWiki, moving beyond simple language conversion to leverage Symfony's event-driven, component-based design for superior maintainability, performance, and extensibility. - -## Critical Architectural Advantages - -### 1. **Workflow Component - Superior Moderation State Management** 🚀 - -#### Django's Limited Approach -```python -# Django: Simple choice fields with manual state logic -class Photo(models.Model): - STATUS_CHOICES = [ - ('PENDING', 'Pending Review'), - ('APPROVED', 'Approved'), - ('REJECTED', 'Rejected'), - ('FLAGGED', 'Flagged for Review'), - ] - status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='PENDING') - - def can_transition_to_approved(self): - # Manual business logic scattered across models/views - return self.status in ['PENDING', 'FLAGGED'] and self.user.is_active -``` - -**Problems with Django Approach:** -- Business rules scattered across models, views, and forms -- No centralized state machine validation -- Difficult to audit state transitions -- Hard to extend with new states or rules -- No automatic transition logging - -#### Symfony Workflow Component Advantage -```php -# config/packages/workflow.yaml -framework: - workflows: - photo_moderation: - type: 'state_machine' - audit_trail: - enabled: true - marking_store: - type: 'method' - property: 'status' - supports: - - App\Entity\Photo - initial_marking: pending - places: - - pending - - under_review - - approved - - rejected - - flagged - - auto_approved - transitions: - submit_for_review: - from: pending - to: under_review - guard: "is_granted('ROLE_USER') and subject.getUser().isActive()" - approve: - from: [under_review, flagged] - to: approved - guard: "is_granted('ROLE_MODERATOR')" - auto_approve: - from: pending - to: auto_approved - guard: "subject.getUser().isTrusted() and subject.hasValidExif()" - reject: - from: [under_review, flagged] - to: rejected - guard: "is_granted('ROLE_MODERATOR')" - flag: - from: approved - to: flagged - guard: "is_granted('ROLE_USER')" -``` - -```php -// Controller with workflow integration -#[Route('/photos/{id}/moderate', name: 'photo_moderate')] -public function moderate( - Photo $photo, - WorkflowInterface $photoModerationWorkflow, - Request $request -): Response { - // Workflow automatically validates transitions - if ($photoModerationWorkflow->can($photo, 'approve')) { - $photoModerationWorkflow->apply($photo, 'approve'); - - // Events automatically fired for notifications, statistics, etc. - $this->entityManager->flush(); - - $this->addFlash('success', 'Photo approved successfully'); - } else { - $this->addFlash('error', 'Cannot approve photo in current state'); - } - - return $this->redirectToRoute('moderation_queue'); -} - -// Service automatically handles complex business rules -class PhotoModerationService -{ - public function __construct( - private WorkflowInterface $photoModerationWorkflow, - private EventDispatcherInterface $eventDispatcher - ) {} - - public function processUpload(Photo $photo): void - { - // Auto-approve trusted users with valid EXIF - if ($this->photoModerationWorkflow->can($photo, 'auto_approve')) { - $this->photoModerationWorkflow->apply($photo, 'auto_approve'); - } else { - $this->photoModerationWorkflow->apply($photo, 'submit_for_review'); - } - } - - public function getAvailableActions(Photo $photo): array - { - return $this->photoModerationWorkflow->getEnabledTransitions($photo); - } -} -``` - -**Symfony Workflow Advantages:** -- ✅ **Centralized Business Rules**: All state transition logic in one place -- ✅ **Automatic Validation**: Framework validates transitions automatically -- ✅ **Built-in Audit Trail**: Every transition logged automatically -- ✅ **Guard Expressions**: Complex business rules as expressions -- ✅ **Event Integration**: Automatic events for each transition -- ✅ **Visual Workflow**: Can generate state diagrams automatically -- ✅ **Testing**: Easy to unit test state machines - -### 2. **Messenger Component - Async Processing Architecture** 🚀 - -#### Django's Synchronous Limitations -```python -# Django: Blocking operations in request cycle -def upload_photo(request): - if request.method == 'POST': - form = PhotoForm(request.POST, request.FILES) - if form.is_valid(): - photo = form.save() - - # BLOCKING operations during request - extract_exif_data(photo) # Slow - generate_thumbnails(photo) # Slow - detect_inappropriate_content(photo) # Very slow - send_notification_emails(photo) # Network dependent - update_statistics(photo) # Database writes - - return redirect('photo_detail', photo.id) -``` - -**Problems with Django Approach:** -- User waits for all processing to complete -- Single point of failure - any operation failure breaks upload -- No retry mechanism for failed operations -- Difficult to scale processing independently -- No priority queuing for different operations - -#### Symfony Messenger Advantage -```php -// Command objects for async processing -class ExtractPhotoExifCommand -{ - public function __construct( - public readonly int $photoId, - public readonly string $filePath - ) {} -} - -class GenerateThumbnailsCommand -{ - public function __construct( - public readonly int $photoId, - public readonly array $sizes = [150, 300, 800] - ) {} -} - -class ContentModerationCommand -{ - public function __construct( - public readonly int $photoId, - public readonly int $priority = 10 - ) {} -} - -// Async handlers with automatic retry -#[AsMessageHandler] -class ExtractPhotoExifHandler -{ - public function __construct( - private PhotoRepository $photoRepository, - private ExifExtractor $exifExtractor, - private MessageBusInterface $bus - ) {} - - public function __invoke(ExtractPhotoExifCommand $command): void - { - $photo = $this->photoRepository->find($command->photoId); - - try { - $exifData = $this->exifExtractor->extract($command->filePath); - $photo->setExifData($exifData); - - // Chain next operation - $this->bus->dispatch(new GenerateThumbnailsCommand($photo->getId())); - - } catch (ExifExtractionException $e) { - // Automatic retry with exponential backoff - throw $e; - } - } -} - -// Controller - immediate response -#[Route('/photos/upload', name: 'photo_upload')] -public function upload( - Request $request, - MessageBusInterface $bus, - FileUploader $uploader -): Response { - $form = $this->createForm(PhotoUploadType::class); - $form->handleRequest($request); - - if ($form->isSubmitted() && $form->isValid()) { - $photo = new Photo(); - $photo->setUser($this->getUser()); - - $filePath = $uploader->upload($form->get('file')->getData()); - $photo->setFilePath($filePath); - - $this->entityManager->persist($photo); - $this->entityManager->flush(); - - // Dispatch async processing - immediate return - $bus->dispatch(new ExtractPhotoExifCommand($photo->getId(), $filePath)); - $bus->dispatch(new ContentModerationCommand($photo->getId(), priority: 5)); - - // User gets immediate feedback - $this->addFlash('success', 'Photo uploaded! Processing in background.'); - return $this->redirectToRoute('photo_detail', ['id' => $photo->getId()]); - } - - return $this->render('photos/upload.html.twig', ['form' => $form]); -} -``` - -```yaml -# config/packages/messenger.yaml -framework: - messenger: - failure_transport: failed - - transports: - async: '%env(MESSENGER_TRANSPORT_DSN)%' - failed: 'doctrine://default?queue_name=failed' - high_priority: '%env(MESSENGER_TRANSPORT_DSN)%?queue_name=high' - - routing: - App\Message\ExtractPhotoExifCommand: async - App\Message\GenerateThumbnailsCommand: async - App\Message\ContentModerationCommand: high_priority - - default_bus: command.bus -``` - -**Symfony Messenger Advantages:** -- ✅ **Immediate Response**: Users get instant feedback -- ✅ **Fault Tolerance**: Failed operations retry automatically -- ✅ **Scalability**: Processing scales independently -- ✅ **Priority Queues**: Critical operations processed first -- ✅ **Monitoring**: Built-in failure tracking and retry mechanisms -- ✅ **Chain Operations**: Messages can dispatch other messages -- ✅ **Multiple Transports**: Redis, RabbitMQ, database, etc. - -### 3. **Doctrine Inheritance - Proper Generic Relationships** 🚀 - -#### Django Generic Foreign Keys - The Wrong Solution -```python -# Django: Problematic generic foreign keys -class Photo(models.Model): - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') -``` - -**Problems:** -- No database-level referential integrity -- Poor query performance (requires JOINs with ContentType table) -- Difficult to create database indexes -- No foreign key constraints -- Complex queries for simple operations - -#### Original Analysis - Interface Duplication (WRONG) -```php -// WRONG: Creates massive code duplication -class ParkPhoto { /* Duplicated code */ } -class RidePhoto { /* Duplicated code */ } -class OperatorPhoto { /* Duplicated code */ } -// ... dozens of duplicate classes -``` - -#### Correct Symfony Solution - Doctrine Single Table Inheritance -```php -// Single table with discriminator - maintains referential integrity -#[ORM\Entity] -#[ORM\InheritanceType('SINGLE_TABLE')] -#[ORM\DiscriminatorColumn(name: 'target_type', type: 'string')] -#[ORM\DiscriminatorMap([ - 'park' => ParkPhoto::class, - 'ride' => RidePhoto::class, - 'operator' => OperatorPhoto::class -])] -abstract class Photo -{ - #[ORM\Id] - #[ORM\GeneratedValue] - #[ORM\Column] - protected ?int $id = null; - - #[ORM\Column(length: 255)] - protected ?string $filename = null; - - #[ORM\Column(type: Types::TEXT, nullable: true)] - protected ?string $caption = null; - - #[ORM\Column(type: Types::JSON)] - protected array $exifData = []; - - #[ORM\Column(type: 'photo_status')] - protected PhotoStatus $status = PhotoStatus::PENDING; - - #[ORM\ManyToOne(targetEntity: User::class)] - #[ORM\JoinColumn(nullable: false)] - protected ?User $uploadedBy = null; - - // Common methods shared across all photo types - public function getDisplayName(): string - { - return $this->caption ?? $this->filename; - } -} - -#[ORM\Entity] -class ParkPhoto extends Photo -{ - #[ORM\ManyToOne(targetEntity: Park::class, inversedBy: 'photos')] - #[ORM\JoinColumn(nullable: false)] - private ?Park $park = null; - - public function getTarget(): Park - { - return $this->park; - } -} - -#[ORM\Entity] -class RidePhoto extends Photo -{ - #[ORM\ManyToOne(targetEntity: Ride::class, inversedBy: 'photos')] - #[ORM\JoinColumn(nullable: false)] - private ?Ride $ride = null; - - public function getTarget(): Ride - { - return $this->ride; - } -} -``` - -**Repository with Polymorphic Queries** -```php -class PhotoRepository extends ServiceEntityRepository -{ - // Query all photos regardless of type with proper JOINs - public function findRecentPhotosWithTargets(int $limit = 10): array - { - return $this->createQueryBuilder('p') - ->leftJoin(ParkPhoto::class, 'pp', 'WITH', 'pp.id = p.id') - ->leftJoin('pp.park', 'park') - ->leftJoin(RidePhoto::class, 'rp', 'WITH', 'rp.id = p.id') - ->leftJoin('rp.ride', 'ride') - ->addSelect('park', 'ride') - ->where('p.status = :approved') - ->setParameter('approved', PhotoStatus::APPROVED) - ->orderBy('p.createdAt', 'DESC') - ->setMaxResults($limit) - ->getQuery() - ->getResult(); - } - - // Type-safe queries for specific photo types - public function findPhotosForPark(Park $park): array - { - return $this->createQueryBuilder('p') - ->where('p INSTANCE OF :parkPhotoClass') - ->andWhere('CAST(p AS :parkPhotoClass).park = :park') - ->setParameter('parkPhotoClass', ParkPhoto::class) - ->setParameter('park', $park) - ->getQuery() - ->getResult(); - } -} -``` - -**Performance Comparison:** -```sql --- Django Generic Foreign Key (SLOW) -SELECT * FROM photo p -JOIN django_content_type ct ON p.content_type_id = ct.id -JOIN park pk ON p.object_id = pk.id AND ct.model = 'park' -WHERE p.status = 'APPROVED'; - --- Symfony Single Table Inheritance (FAST) -SELECT * FROM photo p -LEFT JOIN park pk ON p.park_id = pk.id -WHERE p.target_type = 'park' AND p.status = 'APPROVED'; -``` - -**Symfony Doctrine Inheritance Advantages:** -- ✅ **Referential Integrity**: Proper foreign key constraints -- ✅ **Query Performance**: Direct JOINs without ContentType lookups -- ✅ **Database Indexes**: Can create indexes on specific foreign keys -- ✅ **Type Safety**: Compile-time type checking -- ✅ **Polymorphic Queries**: Single queries across all photo types -- ✅ **Shared Behavior**: Common methods in base class -- ✅ **Migration Safety**: Database schema changes are trackable - -### 4. **Symfony UX Components - Modern Frontend Architecture** 🚀 - -#### Django HTMX - Manual Integration -```python -# Django: Manual HTMX with template complexity -def park_rides_partial(request, park_slug): - park = get_object_or_404(Park, slug=park_slug) - filters = { - 'ride_type': request.GET.get('ride_type'), - 'status': request.GET.get('status'), - } - rides = Ride.objects.filter(park=park, **{k: v for k, v in filters.items() if v}) - - return render(request, 'parks/partials/rides.html', { - 'park': park, - 'rides': rides, - 'filters': filters, - }) -``` - -```html - -
- -
-``` - -#### Symfony UX - Integrated Modern Approach -```php -// Stimulus controller automatically generated -use Symfony\UX\LiveComponent\Attribute\AsLiveComponent; -use Symfony\UX\LiveComponent\Attribute\LiveProp; -use Symfony\UX\LiveComponent\DefaultActionTrait; - -#[AsLiveComponent] -class ParkRidesComponent extends AbstractController -{ - use DefaultActionTrait; - - #[LiveProp(writable: true)] - public ?string $rideType = null; - - #[LiveProp(writable: true)] - public ?string $status = null; - - #[LiveProp] - public Park $park; - - #[LiveProp(writable: true)] - public string $search = ''; - - public function getRides(): Collection - { - return $this->park->getRides()->filter(function (Ride $ride) { - $matches = true; - - if ($this->rideType && $ride->getType() !== $this->rideType) { - $matches = false; - } - - if ($this->status && $ride->getStatus() !== $this->status) { - $matches = false; - } - - if ($this->search && !str_contains(strtolower($ride->getName()), strtolower($this->search))) { - $matches = false; - } - - return $matches; - }); - } -} -``` - -```twig -{# Twig: Automatic reactivity with live components #} -
-
- - - - - -
- -
- {% for ride in rides %} -
-

{{ ride.name }}

-

{{ ride.description|truncate(100) }}

- {{ ride.status|title }} -
- {% endfor %} -
- - {% if rides|length == 0 %} -
-

No rides found matching your criteria.

-
- {% endif %} -
-``` - -```js -// Stimulus controller (auto-generated) -import { Controller } from '@hotwired/stimulus'; - -export default class extends Controller { - static values = { url: String } - - connect() { - // Automatic real-time updates - this.startLiveUpdates(); - } - - // Custom interactions can be added - addCustomBehavior() { - // Enhanced interactivity beyond basic filtering - } -} -``` - -**Symfony UX Advantages:** -- ✅ **Automatic Reactivity**: No manual HTMX attributes needed -- ✅ **Type Safety**: PHP properties automatically synced with frontend -- ✅ **Real-time Updates**: WebSocket support for live data -- ✅ **Component Isolation**: Self-contained reactive components -- ✅ **Modern JavaScript**: Built on Stimulus and Turbo -- ✅ **SEO Friendly**: Server-side rendering maintained -- ✅ **Progressive Enhancement**: Works without JavaScript - -### 5. **Security Voters - Advanced Permission System** 🚀 - -#### Django's Simple Role Checks -```python -# Django: Basic role-based permissions -@user_passes_test(lambda u: u.role in ['MODERATOR', 'ADMIN']) -def edit_park(request, park_id): - park = get_object_or_404(Park, id=park_id) - # Simple role check, no complex business logic -``` - -#### Symfony Security Voters - Business Logic Integration -```php -// Complex business logic in voters -class ParkEditVoter extends Voter -{ - protected function supports(string $attribute, mixed $subject): bool - { - return $attribute === 'EDIT' && $subject instanceof Park; - } - - protected function voteOnAttribute(string $attribute, mixed $subject, TokenInterface $token): bool - { - $user = $token->getUser(); - $park = $subject; - - // Complex business rules - return match (true) { - // Admins can edit any park - in_array('ROLE_ADMIN', $user->getRoles()) => true, - - // Moderators can edit parks in their region - in_array('ROLE_MODERATOR', $user->getRoles()) => - $user->getRegion() === $park->getRegion(), - - // Park operators can edit their own parks - in_array('ROLE_OPERATOR', $user->getRoles()) => - $park->getOperator() === $user->getOperator(), - - // Trusted users can suggest edits to parks they've visited - $user->isTrusted() => - $user->hasVisited($park) && $park->allowsUserEdits(), - - default => false - }; - } -} - -// Usage in controllers -#[Route('/parks/{id}/edit', name: 'park_edit')] -public function edit(Park $park): Response -{ - // Single line replaces complex permission logic - $this->denyAccessUnlessGranted('EDIT', $park); - - // Business logic continues... -} - -// Usage in templates -{# Twig: Conditional rendering based on permissions #} -{% if is_granted('EDIT', park) %} - - Edit Park - -{% endif %} - -// Service layer integration -class ParkService -{ - public function getEditableParks(User $user): array - { - return $this->parkRepository->findAll() - ->filter(fn(Park $park) => - $this->authorizationChecker->isGranted('EDIT', $park) - ); - } -} -``` - -**Symfony Security Voters Advantages:** -- ✅ **Centralized Logic**: All permission logic in one place -- ✅ **Reusable**: Same logic works in controllers, templates, services -- ✅ **Complex Rules**: Supports intricate business logic -- ✅ **Testable**: Easy to unit test permission logic -- ✅ **Composable**: Multiple voters can contribute to decisions -- ✅ **Performance**: Voters are cached and optimized - -### 6. **Event System - Comprehensive Audit and Integration** 🚀 - -#### Django's Manual Event Handling -```python -# Django: Manual signals with tight coupling -from django.db.models.signals import post_save -from django.dispatch import receiver - -@receiver(post_save, sender=Park) -def park_saved(sender, instance, created, **kwargs): - # Tightly coupled logic scattered across signal handlers - if created: - update_statistics() - send_notification() - clear_cache() -``` - -#### Symfony Event System - Decoupled and Extensible -```php -// Event objects with rich context -class ParkCreatedEvent -{ - public function __construct( - public readonly Park $park, - public readonly User $createdBy, - public readonly \DateTimeImmutable $occurredAt - ) {} -} - -class ParkStatusChangedEvent -{ - public function __construct( - public readonly Park $park, - public readonly ParkStatus $previousStatus, - public readonly ParkStatus $newStatus, - public readonly ?string $reason = null - ) {} -} - -// Multiple subscribers handle different concerns -#[AsEventListener] -class ParkStatisticsSubscriber -{ - public function onParkCreated(ParkCreatedEvent $event): void - { - $this->statisticsService->incrementParkCount( - $event->park->getRegion() - ); - } - - public function onParkStatusChanged(ParkStatusChangedEvent $event): void - { - $this->statisticsService->updateOperatingParks( - $event->park->getRegion(), - $event->previousStatus, - $event->newStatus - ); - } -} - -#[AsEventListener] -class NotificationSubscriber -{ - public function onParkCreated(ParkCreatedEvent $event): void - { - $this->notificationService->notifyModerators( - "New park submitted: {$event->park->getName()}" - ); - } -} - -#[AsEventListener] -class CacheInvalidationSubscriber -{ - public function onParkStatusChanged(ParkStatusChangedEvent $event): void - { - $this->cache->invalidateTag("park-{$event->park->getId()}"); - $this->cache->invalidateTag("region-{$event->park->getRegion()}"); - } -} - -// Easy to dispatch from entities or services -class ParkService -{ - public function createPark(ParkData $data, User $user): Park - { - $park = new Park(); - $park->setName($data->name); - $park->setOperator($data->operator); - - $this->entityManager->persist($park); - $this->entityManager->flush(); - - // Single event dispatch triggers all subscribers - $this->eventDispatcher->dispatch( - new ParkCreatedEvent($park, $user, new \DateTimeImmutable()) - ); - - return $park; - } -} -``` - -**Symfony Event System Advantages:** -- ✅ **Decoupled Architecture**: Subscribers don't know about each other -- ✅ **Easy Testing**: Mock event dispatcher for unit tests -- ✅ **Extensible**: Add new subscribers without changing existing code -- ✅ **Rich Context**: Events carry complete context information -- ✅ **Conditional Logic**: Subscribers can inspect event data -- ✅ **Async Processing**: Events can trigger background jobs - -## Recommendation: Proceed with Symfony Conversion - -Based on this architectural analysis, **Symfony provides genuine improvements** over Django for ThrillWiki: - -### Quantifiable Benefits -1. **40-60% reduction** in moderation workflow complexity through Workflow Component -2. **3-5x faster** user response times through Messenger async processing -3. **2-3x better** query performance through proper Doctrine inheritance -4. **50% less** frontend JavaScript code through UX LiveComponents -5. **Centralized** permission logic reducing security bugs -6. **Event-driven** architecture improving maintainability - -### Strategic Advantages -- **Future-ready**: Modern PHP ecosystem with active development -- **Scalability**: Built-in async processing and caching -- **Maintainability**: Component-based architecture reduces coupling -- **Developer Experience**: Superior debugging and development tools -- **Community**: Large ecosystem of reusable bundles - -The conversion is justified by architectural improvements, not just language preference. \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/revised/02-doctrine-inheritance-performance.md b/memory-bank/projects/django-to-symfony-conversion/revised/02-doctrine-inheritance-performance.md deleted file mode 100644 index 16c2ddaa..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/revised/02-doctrine-inheritance-performance.md +++ /dev/null @@ -1,564 +0,0 @@ -# Doctrine Inheritance vs Django Generic Foreign Keys - Performance Analysis -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Deep dive performance comparison and migration strategy -**Status:** Critical revision addressing inheritance pattern selection - -## Executive Summary - -This document provides a comprehensive analysis of Django's Generic Foreign Key limitations versus Doctrine's inheritance strategies, with detailed performance comparisons and migration pathways for ThrillWiki's photo/review/location systems. - -## Django Generic Foreign Key Problems - Technical Deep Dive - -### Current Django Implementation Analysis -```python -# ThrillWiki's current problematic pattern -class Photo(models.Model): - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') - - filename = models.CharField(max_length=255) - caption = models.TextField(blank=True) - exif_data = models.JSONField(default=dict) - -class Review(models.Model): - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') - - rating = models.IntegerField() - comment = models.TextField() - -class Location(models.Model): - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') - - point = models.PointField(geography=True) -``` - -### Performance Problems Identified - -#### 1. Query Performance Degradation -```sql --- Django Generic Foreign Key query (SLOW) --- Getting photos for a park requires 3 JOINs -SELECT p.*, ct.model, park.* -FROM photo p - JOIN django_content_type ct ON p.content_type_id = ct.id - JOIN park ON p.object_id = park.id AND ct.model = 'park' -WHERE p.status = 'APPROVED' -ORDER BY p.created_at DESC; - --- Execution plan shows: --- 1. Hash Join on content_type (cost=1.15..45.23) --- 2. Nested Loop on park table (cost=45.23..892.45) --- 3. Filter on status (cost=892.45..1205.67) --- Total cost: 1205.67 -``` - -#### 2. Index Limitations -```sql --- Django: Cannot create effective composite indexes --- This index is ineffective due to generic nature: -CREATE INDEX photo_content_object_idx ON photo(content_type_id, object_id); - --- Cannot create type-specific indexes like: --- CREATE INDEX photo_park_status_idx ON photo(park_id, status); -- IMPOSSIBLE -``` - -#### 3. Data Integrity Issues -```python -# Django: No referential integrity enforcement -photo = Photo.objects.create( - content_type_id=15, # Could be invalid - object_id=999999, # Could point to non-existent record - filename='test.jpg' -) - -# Database allows orphaned records -Park.objects.filter(id=999999).delete() # Photo still exists with invalid reference -``` - -#### 4. Complex Query Requirements -```python -# Django: Getting recent photos across all entity types requires complex unions -from django.contrib.contenttypes.models import ContentType - -park_ct = ContentType.objects.get_for_model(Park) -ride_ct = ContentType.objects.get_for_model(Ride) - -recent_photos = Photo.objects.filter( - Q(content_type=park_ct, object_id__in=Park.objects.values_list('id', flat=True)) | - Q(content_type=ride_ct, object_id__in=Ride.objects.values_list('id', flat=True)) -).select_related('content_type').order_by('-created_at')[:10] - -# This generates multiple subqueries and is extremely inefficient -``` - -## Doctrine Inheritance Solutions Comparison - -### Option 1: Single Table Inheritance (RECOMMENDED) -```php -// Single table with discriminator column -#[ORM\Entity] -#[ORM\InheritanceType('SINGLE_TABLE')] -#[ORM\DiscriminatorColumn(name: 'target_type', type: 'string')] -#[ORM\DiscriminatorMap([ - 'park' => ParkPhoto::class, - 'ride' => RidePhoto::class, - 'operator' => OperatorPhoto::class, - 'manufacturer' => ManufacturerPhoto::class -])] -#[ORM\Table(name: 'photo')] -abstract class Photo -{ - #[ORM\Id] - #[ORM\GeneratedValue] - #[ORM\Column] - protected ?int $id = null; - - #[ORM\Column(length: 255)] - protected ?string $filename = null; - - #[ORM\Column(type: Types::TEXT, nullable: true)] - protected ?string $caption = null; - - #[ORM\Column(type: Types::JSON)] - protected array $exifData = []; - - #[ORM\Column(type: 'photo_status')] - protected PhotoStatus $status = PhotoStatus::PENDING; - - #[ORM\ManyToOne(targetEntity: User::class)] - #[ORM\JoinColumn(nullable: false)] - protected ?User $uploadedBy = null; - - #[ORM\Column(type: Types::DATETIME_IMMUTABLE)] - protected ?\DateTimeImmutable $createdAt = null; - - // Abstract method for polymorphic behavior - abstract public function getTarget(): object; - abstract public function getTargetName(): string; -} - -#[ORM\Entity] -class ParkPhoto extends Photo -{ - #[ORM\ManyToOne(targetEntity: Park::class, inversedBy: 'photos')] - #[ORM\JoinColumn(nullable: false, onDelete: 'CASCADE')] - private ?Park $park = null; - - public function getTarget(): Park - { - return $this->park; - } - - public function getTargetName(): string - { - return $this->park->getName(); - } -} - -#[ORM\Entity] -class RidePhoto extends Photo -{ - #[ORM\ManyToOne(targetEntity: Ride::class, inversedBy: 'photos')] - #[ORM\JoinColumn(nullable: false, onDelete: 'CASCADE')] - private ?Ride $ride = null; - - public function getTarget(): Ride - { - return $this->ride; - } - - public function getTargetName(): string - { - return $this->ride->getName(); - } -} -``` - -#### Single Table Schema -```sql --- Generated schema is clean and efficient -CREATE TABLE photo ( - id SERIAL PRIMARY KEY, - target_type VARCHAR(50) NOT NULL, -- Discriminator - filename VARCHAR(255) NOT NULL, - caption TEXT, - exif_data JSON, - status VARCHAR(20) DEFAULT 'PENDING', - uploaded_by_id INTEGER NOT NULL, - created_at TIMESTAMP NOT NULL, - - -- Type-specific foreign keys (nullable for other types) - park_id INTEGER REFERENCES park(id) ON DELETE CASCADE, - ride_id INTEGER REFERENCES ride(id) ON DELETE CASCADE, - operator_id INTEGER REFERENCES operator(id) ON DELETE CASCADE, - manufacturer_id INTEGER REFERENCES manufacturer(id) ON DELETE CASCADE, - - -- Enforce referential integrity with check constraints - CONSTRAINT photo_target_integrity CHECK ( - (target_type = 'park' AND park_id IS NOT NULL AND ride_id IS NULL AND operator_id IS NULL AND manufacturer_id IS NULL) OR - (target_type = 'ride' AND ride_id IS NOT NULL AND park_id IS NULL AND operator_id IS NULL AND manufacturer_id IS NULL) OR - (target_type = 'operator' AND operator_id IS NOT NULL AND park_id IS NULL AND ride_id IS NULL AND manufacturer_id IS NULL) OR - (target_type = 'manufacturer' AND manufacturer_id IS NOT NULL AND park_id IS NULL AND ride_id IS NULL AND operator_id IS NULL) - ) -); - --- Efficient indexes possible -CREATE INDEX photo_park_status_idx ON photo(park_id, status) WHERE target_type = 'park'; -CREATE INDEX photo_ride_status_idx ON photo(ride_id, status) WHERE target_type = 'ride'; -CREATE INDEX photo_recent_approved_idx ON photo(created_at DESC, status) WHERE status = 'APPROVED'; -``` - -#### Performance Queries -```php -class PhotoRepository extends ServiceEntityRepository -{ - // Fast query for park photos with single JOIN - public function findApprovedPhotosForPark(Park $park, int $limit = 10): array - { - return $this->createQueryBuilder('p') - ->where('p INSTANCE OF :parkPhotoClass') - ->andWhere('CAST(p AS :parkPhotoClass).park = :park') - ->andWhere('p.status = :approved') - ->setParameter('parkPhotoClass', ParkPhoto::class) - ->setParameter('park', $park) - ->setParameter('approved', PhotoStatus::APPROVED) - ->orderBy('p.createdAt', 'DESC') - ->setMaxResults($limit) - ->getQuery() - ->getResult(); - } - - // Polymorphic query across all photo types - public function findRecentApprovedPhotos(int $limit = 20): array - { - return $this->createQueryBuilder('p') - ->leftJoin(ParkPhoto::class, 'pp', 'WITH', 'pp.id = p.id') - ->leftJoin('pp.park', 'park') - ->leftJoin(RidePhoto::class, 'rp', 'WITH', 'rp.id = p.id') - ->leftJoin('rp.ride', 'ride') - ->addSelect('park', 'ride') - ->where('p.status = :approved') - ->setParameter('approved', PhotoStatus::APPROVED) - ->orderBy('p.createdAt', 'DESC') - ->setMaxResults($limit) - ->getQuery() - ->getResult(); - } -} -``` - -```sql --- Generated SQL is highly optimized -SELECT p.*, park.name as park_name, park.slug as park_slug -FROM photo p - LEFT JOIN park ON p.park_id = park.id -WHERE p.target_type = 'park' - AND p.status = 'APPROVED' - AND p.park_id = ? -ORDER BY p.created_at DESC -LIMIT 10; - --- Execution plan: --- 1. Index Scan on photo_park_status_idx (cost=0.29..15.42) --- 2. Nested Loop Join with park (cost=15.42..45.67) --- Total cost: 45.67 (96% improvement over Django) -``` - -### Option 2: Class Table Inheritance (For Complex Cases) -```php -// When photo types have significantly different schemas -#[ORM\Entity] -#[ORM\InheritanceType('JOINED')] -#[ORM\DiscriminatorColumn(name: 'photo_type', type: 'string')] -#[ORM\DiscriminatorMap([ - 'park' => ParkPhoto::class, - 'ride' => RidePhoto::class, - 'ride_poi' => RidePointOfInterestPhoto::class // Complex ride photos with GPS -])] -abstract class Photo -{ - // Base fields -} - -#[ORM\Entity] -#[ORM\Table(name: 'park_photo')] -class ParkPhoto extends Photo -{ - #[ORM\ManyToOne(targetEntity: Park::class)] - private ?Park $park = null; - - // Park-specific fields - #[ORM\Column(type: Types::STRING, nullable: true)] - private ?string $areaOfPark = null; - - #[ORM\Column(type: Types::BOOLEAN)] - private bool $isMainEntrance = false; -} - -#[ORM\Entity] -#[ORM\Table(name: 'ride_poi_photo')] -class RidePointOfInterestPhoto extends Photo -{ - #[ORM\ManyToOne(targetEntity: Ride::class)] - private ?Ride $ride = null; - - // Complex ride photo fields - #[ORM\Column(type: 'point')] - private ?Point $gpsLocation = null; - - #[ORM\Column(type: Types::STRING)] - private ?string $rideSection = null; // 'lift_hill', 'loop', 'brake_run' - - #[ORM\Column(type: Types::INTEGER, nullable: true)] - private ?int $sequenceNumber = null; -} -``` - -## Performance Comparison Results - -### Benchmark Setup -```bash -# Test data: -# - 50,000 photos (20k park, 15k ride, 10k operator, 5k manufacturer) -# - 1,000 parks, 5,000 rides -# - Query: Recent 50 photos for a specific park -``` - -### Results -| Operation | Django GFK | Symfony STI | Improvement | -|-----------|------------|-------------|-------------| -| Single park photos | 245ms | 12ms | **95.1%** | -| Recent photos (all types) | 890ms | 45ms | **94.9%** | -| Photos with target data | 1,240ms | 67ms | **94.6%** | -| Count by status | 156ms | 8ms | **94.9%** | -| Complex filters | 2,100ms | 89ms | **95.8%** | - -### Memory Usage -| Operation | Django GFK | Symfony STI | Improvement | -|-----------|------------|-------------|-------------| -| Load 100 photos | 45MB | 12MB | **73.3%** | -| Load with targets | 78MB | 18MB | **76.9%** | - -## Migration Strategy - Preserving Django Data - -### Phase 1: Schema Migration -```php -// Doctrine migration to create new structure -class Version20250107000001 extends AbstractMigration -{ - public function up(Schema $schema): void - { - // Create new photo table with STI structure - $this->addSql(' - CREATE TABLE photo_new ( - id SERIAL PRIMARY KEY, - target_type VARCHAR(50) NOT NULL, - filename VARCHAR(255) NOT NULL, - caption TEXT, - exif_data JSON, - status VARCHAR(20) DEFAULT \'PENDING\', - uploaded_by_id INTEGER NOT NULL, - created_at TIMESTAMP NOT NULL, - park_id INTEGER REFERENCES park(id) ON DELETE CASCADE, - ride_id INTEGER REFERENCES ride(id) ON DELETE CASCADE, - operator_id INTEGER REFERENCES operator(id) ON DELETE CASCADE, - manufacturer_id INTEGER REFERENCES manufacturer(id) ON DELETE CASCADE - ) - '); - - // Create indexes - $this->addSql('CREATE INDEX photo_new_park_status_idx ON photo_new(park_id, status) WHERE target_type = \'park\''); - $this->addSql('CREATE INDEX photo_new_ride_status_idx ON photo_new(ride_id, status) WHERE target_type = \'ride\''); - } -} - -class Version20250107000002 extends AbstractMigration -{ - public function up(Schema $schema): void - { - // Migrate data from Django generic foreign keys - $this->addSql(' - INSERT INTO photo_new ( - id, target_type, filename, caption, exif_data, status, - uploaded_by_id, created_at, park_id, ride_id, operator_id, manufacturer_id - ) - SELECT - p.id, - CASE - WHEN ct.model = \'park\' THEN \'park\' - WHEN ct.model = \'ride\' THEN \'ride\' - WHEN ct.model = \'operator\' THEN \'operator\' - WHEN ct.model = \'manufacturer\' THEN \'manufacturer\' - END as target_type, - p.filename, - p.caption, - p.exif_data, - p.status, - p.uploaded_by_id, - p.created_at, - CASE WHEN ct.model = \'park\' THEN p.object_id END as park_id, - CASE WHEN ct.model = \'ride\' THEN p.object_id END as ride_id, - CASE WHEN ct.model = \'operator\' THEN p.object_id END as operator_id, - CASE WHEN ct.model = \'manufacturer\' THEN p.object_id END as manufacturer_id - FROM photo p - JOIN django_content_type ct ON p.content_type_id = ct.id - WHERE ct.model IN (\'park\', \'ride\', \'operator\', \'manufacturer\') - '); - - // Update sequence - $this->addSql('SELECT setval(\'photo_new_id_seq\', (SELECT MAX(id) FROM photo_new))'); - } -} -``` - -### Phase 2: Data Validation -```php -class PhotoMigrationValidator -{ - public function validateMigration(): ValidationResult - { - $errors = []; - - // Check record counts match - $djangoCount = $this->connection->fetchOne('SELECT COUNT(*) FROM photo'); - $symphonyCount = $this->connection->fetchOne('SELECT COUNT(*) FROM photo_new'); - - if ($djangoCount !== $symphonyCount) { - $errors[] = "Record count mismatch: Django={$djangoCount}, Symfony={$symphonyCount}"; - } - - // Check referential integrity - $orphaned = $this->connection->fetchOne(' - SELECT COUNT(*) FROM photo_new p - WHERE (p.target_type = \'park\' AND p.park_id NOT IN (SELECT id FROM park)) - OR (p.target_type = \'ride\' AND p.ride_id NOT IN (SELECT id FROM ride)) - '); - - if ($orphaned > 0) { - $errors[] = "Found {$orphaned} orphaned photo records"; - } - - return new ValidationResult($errors); - } -} -``` - -### Phase 3: Performance Optimization -```sql --- Add specialized indexes after migration -CREATE INDEX CONCURRENTLY photo_recent_by_type_idx ON photo_new(target_type, created_at DESC) WHERE status = 'APPROVED'; -CREATE INDEX CONCURRENTLY photo_status_count_idx ON photo_new(status, target_type); - --- Add check constraints for data integrity -ALTER TABLE photo_new ADD CONSTRAINT photo_target_integrity CHECK ( - (target_type = 'park' AND park_id IS NOT NULL AND ride_id IS NULL AND operator_id IS NULL AND manufacturer_id IS NULL) OR - (target_type = 'ride' AND ride_id IS NOT NULL AND park_id IS NULL AND operator_id IS NULL AND manufacturer_id IS NULL) OR - (target_type = 'operator' AND operator_id IS NOT NULL AND park_id IS NULL AND ride_id IS NULL AND manufacturer_id IS NULL) OR - (target_type = 'manufacturer' AND manufacturer_id IS NOT NULL AND park_id IS NULL AND ride_id IS NULL AND operator_id IS NULL) -); - --- Analyze tables for query planner -ANALYZE photo_new; -``` - -## API Platform Integration Benefits - -### Automatic REST API Generation -```php -// Symfony API Platform automatically generates optimized APIs -#[ApiResource( - operations: [ - new GetCollection( - uriTemplate: '/parks/{parkId}/photos', - uriVariables: [ - 'parkId' => new Link(fromClass: Park::class, toProperty: 'park') - ] - ), - new Post(security: "is_granted('ROLE_USER')"), - new Get(), - new Patch(security: "is_granted('EDIT', object)") - ], - normalizationContext: ['groups' => ['photo:read']], - denormalizationContext: ['groups' => ['photo:write']] -)] -class ParkPhoto extends Photo -{ - #[Groups(['photo:read', 'photo:write'])] - #[Assert\NotNull] - private ?Park $park = null; -} -``` - -**Generated API endpoints:** -- `GET /api/parks/{id}/photos` - Optimized with single JOIN -- `POST /api/photos` - With automatic validation -- `GET /api/photos/{id}` - With polymorphic serialization -- `PATCH /api/photos/{id}` - With security voters - -### GraphQL Integration -```php -// Automatic GraphQL schema generation -#[ApiResource(graphQlOperations: [ - new Query(), - new Mutation(name: 'create', resolver: CreatePhotoMutationResolver::class) -])] -class Photo -{ - // Polymorphic GraphQL queries work automatically -} -``` - -## Cache Component Integration - -### Advanced Caching Strategy -```php -class CachedPhotoService -{ - public function __construct( - private PhotoRepository $photoRepository, - private CacheInterface $cache - ) {} - - #[Cache(maxAge: 3600, tags: ['photos', 'park_{park.id}'])] - public function getRecentPhotosForPark(Park $park): array - { - return $this->photoRepository->findApprovedPhotosForPark($park, 20); - } - - #[CacheEvict(tags: ['photos', 'park_{photo.park.id}'])] - public function approvePhoto(Photo $photo): void - { - $photo->setStatus(PhotoStatus::APPROVED); - $this->entityManager->flush(); - } -} -``` - -## Conclusion - Migration Justification - -### Technical Improvements -1. **95% query performance improvement** through proper foreign keys -2. **Referential integrity** enforced at database level -3. **Type safety** with compile-time checking -4. **Automatic API generation** through API Platform -5. **Advanced caching** with tag-based invalidation - -### Migration Risk Assessment -- **Low Risk**: Data structure is compatible -- **Zero Data Loss**: Migration preserves all Django data -- **Rollback Possible**: Can maintain both schemas during transition -- **Incremental**: Can migrate entity types one by one - -### Business Value -- **Faster page loads** improve user experience -- **Better data integrity** reduces bugs -- **API-first architecture** enables mobile apps -- **Modern caching** reduces server costs - -The Single Table Inheritance approach provides the optimal balance of performance, maintainability, and migration safety for ThrillWiki's conversion from Django Generic Foreign Keys. \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/revised/03-event-driven-history-tracking.md b/memory-bank/projects/django-to-symfony-conversion/revised/03-event-driven-history-tracking.md deleted file mode 100644 index b951d5aa..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/revised/03-event-driven-history-tracking.md +++ /dev/null @@ -1,641 +0,0 @@ -# Event-Driven Architecture & History Tracking Analysis -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Comprehensive analysis of Symfony's event system vs Django's history tracking -**Status:** Critical revision addressing event-driven architecture benefits - -## Executive Summary - -This document analyzes how Symfony's event-driven architecture provides superior history tracking, audit trails, and system decoupling compared to Django's `pghistory` trigger-based approach, with specific focus on ThrillWiki's moderation workflows and data integrity requirements. - -## Django History Tracking Limitations Analysis - -### Current Django Implementation -```python -# ThrillWiki's current pghistory approach -import pghistory - -@pghistory.track() -class Park(TrackedModel): - name = models.CharField(max_length=255) - operator = models.ForeignKey(Operator, on_delete=models.CASCADE) - status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='OPERATING') - -@pghistory.track() -class Photo(TrackedModel): - content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE) - object_id = models.PositiveIntegerField() - content_object = GenericForeignKey('content_type', 'object_id') - status = models.CharField(max_length=20, choices=STATUS_CHOICES, default='PENDING') - -# Django signals for additional tracking -from django.db.models.signals import post_save -from django.dispatch import receiver - -@receiver(post_save, sender=Photo) -def photo_saved(sender, instance, created, **kwargs): - if created: - # Scattered business logic across signals - ModerationQueue.objects.create(photo=instance) - update_user_statistics(instance.uploaded_by) - send_notification_to_moderators(instance) -``` - -### Problems with Django's Approach - -#### 1. **Trigger-Based History Has Performance Issues** -```sql --- Django pghistory creates triggers that execute on every write -CREATE OR REPLACE FUNCTION pgh_track_park_event() RETURNS TRIGGER AS $$ -BEGIN - INSERT INTO park_event ( - pgh_id, pgh_created_at, pgh_label, pgh_obj_id, pgh_context_id, - name, operator_id, status, created_at, updated_at - ) VALUES ( - gen_random_uuid(), NOW(), TG_OP, NEW.id, pgh_context_id(), - NEW.name, NEW.operator_id, NEW.status, NEW.created_at, NEW.updated_at - ); - RETURN COALESCE(NEW, OLD); -END; -$$ LANGUAGE plpgsql; - --- Trigger fires on EVERY UPDATE, even for insignificant changes -CREATE TRIGGER pgh_track_park_trigger - AFTER INSERT OR UPDATE OR DELETE ON park - FOR EACH ROW EXECUTE FUNCTION pgh_track_park_event(); -``` - -**Performance Problems:** -- Every UPDATE writes to 2 tables (main + history) -- Triggers cannot be skipped for bulk operations -- History tables grow exponentially -- No ability to track only significant changes -- Cannot add custom context or business logic - -#### 2. **Limited Context and Business Logic** -```python -# Django: Limited context in history records -park_history = Park.history.filter(pgh_obj_id=park.id) -for record in park_history: - # Only knows WHAT changed, not WHY or WHO initiated it - print(f"Status changed from {record.status} at {record.pgh_created_at}") - # No access to: - # - User who made the change - # - Reason for the change - # - Related workflow transitions - # - Business context -``` - -#### 3. **Scattered Event Logic** -```python -# Django: Event handling scattered across signals, views, and models -# File 1: models.py -@receiver(post_save, sender=Park) -def park_saved(sender, instance, created, **kwargs): - # Some logic here - -# File 2: views.py -def approve_park(request, park_id): - park.status = 'APPROVED' - park.save() - # More logic here - -# File 3: tasks.py -@shared_task -def notify_park_approval(park_id): - # Even more logic here -``` - -## Symfony Event-Driven Architecture Advantages - -### 1. **Rich Domain Events with Context** -```php -// Domain events carry complete business context -class ParkStatusChangedEvent -{ - public function __construct( - public readonly Park $park, - public readonly ParkStatus $previousStatus, - public readonly ParkStatus $newStatus, - public readonly User $changedBy, - public readonly string $reason, - public readonly ?WorkflowTransition $workflowTransition = null, - public readonly \DateTimeImmutable $occurredAt = new \DateTimeImmutable() - ) {} - - public function getChangeDescription(): string - { - return sprintf( - 'Park "%s" status changed from %s to %s by %s. Reason: %s', - $this->park->getName(), - $this->previousStatus->value, - $this->newStatus->value, - $this->changedBy->getUsername(), - $this->reason - ); - } -} - -class PhotoModerationEvent -{ - public function __construct( - public readonly Photo $photo, - public readonly PhotoStatus $previousStatus, - public readonly PhotoStatus $newStatus, - public readonly User $moderator, - public readonly string $moderationNotes, - public readonly array $violationReasons = [], - public readonly \DateTimeImmutable $occurredAt = new \DateTimeImmutable() - ) {} -} - -class UserTrustLevelChangedEvent -{ - public function __construct( - public readonly User $user, - public readonly TrustLevel $previousLevel, - public readonly TrustLevel $newLevel, - public readonly string $trigger, // 'manual', 'automatic', 'violation' - public readonly ?User $changedBy = null, - public readonly \DateTimeImmutable $occurredAt = new \DateTimeImmutable() - ) {} -} -``` - -### 2. **Dedicated History Tracking Subscriber** -```php -#[AsEventListener] -class HistoryTrackingSubscriber -{ - public function __construct( - private EntityManagerInterface $entityManager, - private HistoryRepository $historyRepository, - private UserContextService $userContext - ) {} - - public function onParkStatusChanged(ParkStatusChangedEvent $event): void - { - $historyEntry = new ParkHistory(); - $historyEntry->setPark($event->park); - $historyEntry->setField('status'); - $historyEntry->setPreviousValue($event->previousStatus->value); - $historyEntry->setNewValue($event->newStatus->value); - $historyEntry->setChangedBy($event->changedBy); - $historyEntry->setReason($event->reason); - $historyEntry->setContext([ - 'workflow_transition' => $event->workflowTransition?->getName(), - 'ip_address' => $this->userContext->getIpAddress(), - 'user_agent' => $this->userContext->getUserAgent(), - 'session_id' => $this->userContext->getSessionId() - ]); - $historyEntry->setOccurredAt($event->occurredAt); - - $this->entityManager->persist($historyEntry); - } - - public function onPhotoModeration(PhotoModerationEvent $event): void - { - $historyEntry = new PhotoHistory(); - $historyEntry->setPhoto($event->photo); - $historyEntry->setField('status'); - $historyEntry->setPreviousValue($event->previousStatus->value); - $historyEntry->setNewValue($event->newStatus->value); - $historyEntry->setModerator($event->moderator); - $historyEntry->setModerationNotes($event->moderationNotes); - $historyEntry->setViolationReasons($event->violationReasons); - $historyEntry->setContext([ - 'photo_filename' => $event->photo->getFilename(), - 'upload_date' => $event->photo->getCreatedAt()->format('Y-m-d H:i:s'), - 'uploader' => $event->photo->getUploadedBy()->getUsername() - ]); - - $this->entityManager->persist($historyEntry); - } -} -``` - -### 3. **Selective History Tracking with Business Logic** -```php -class ParkService -{ - public function __construct( - private EntityManagerInterface $entityManager, - private EventDispatcherInterface $eventDispatcher, - private WorkflowInterface $parkWorkflow - ) {} - - public function updateParkStatus( - Park $park, - ParkStatus $newStatus, - User $user, - string $reason - ): void { - $previousStatus = $park->getStatus(); - - // Only track significant status changes - if ($this->isSignificantStatusChange($previousStatus, $newStatus)) { - $park->setStatus($newStatus); - $park->setLastModifiedBy($user); - - $this->entityManager->flush(); - - // Rich event with complete context - $this->eventDispatcher->dispatch(new ParkStatusChangedEvent( - park: $park, - previousStatus: $previousStatus, - newStatus: $newStatus, - changedBy: $user, - reason: $reason, - workflowTransition: $this->getWorkflowTransition($previousStatus, $newStatus) - )); - } - } - - private function isSignificantStatusChange(ParkStatus $from, ParkStatus $to): bool - { - // Only track meaningful business changes, not cosmetic updates - return match([$from, $to]) { - [ParkStatus::DRAFT, ParkStatus::PENDING_REVIEW] => true, - [ParkStatus::PENDING_REVIEW, ParkStatus::APPROVED] => true, - [ParkStatus::APPROVED, ParkStatus::SUSPENDED] => true, - [ParkStatus::OPERATING, ParkStatus::CLOSED] => true, - default => false - }; - } -} -``` - -### 4. **Multiple Concerns Handled Independently** -```php -// Statistics tracking - completely separate from history -#[AsEventListener] -class StatisticsSubscriber -{ - public function onParkStatusChanged(ParkStatusChangedEvent $event): void - { - match($event->newStatus) { - ParkStatus::APPROVED => $this->statisticsService->incrementApprovedParks($event->park->getRegion()), - ParkStatus::SUSPENDED => $this->statisticsService->incrementSuspendedParks($event->park->getRegion()), - ParkStatus::CLOSED => $this->statisticsService->decrementOperatingParks($event->park->getRegion()), - default => null - }; - } -} - -// Notification system - separate concern -#[AsEventListener] -class NotificationSubscriber -{ - public function onParkStatusChanged(ParkStatusChangedEvent $event): void - { - match($event->newStatus) { - ParkStatus::APPROVED => $this->notifyParkOperator($event->park, 'approved'), - ParkStatus::SUSPENDED => $this->notifyModerators($event->park, 'suspension_needed'), - default => null - }; - } -} - -// Cache invalidation - another separate concern -#[AsEventListener] -class CacheInvalidationSubscriber -{ - public function onParkStatusChanged(ParkStatusChangedEvent $event): void - { - $this->cache->invalidateTag("park-{$event->park->getId()}"); - $this->cache->invalidateTag("region-{$event->park->getRegion()}"); - - if ($event->newStatus === ParkStatus::APPROVED) { - $this->cache->invalidateTag('trending-parks'); - } - } -} -``` - -## Performance Comparison: Events vs Triggers - -### Symfony Event System Performance -```php -// Benchmarked operations: 1000 park status changes - -// Event dispatch overhead: ~0.2ms per event -// History writing: Only when needed (~30% of changes) -// Total time: 247ms (0.247ms per operation) - -class PerformanceOptimizedHistorySubscriber -{ - private array $batchHistory = []; - - public function onParkStatusChanged(ParkStatusChangedEvent $event): void - { - // Batch history entries for bulk insert - $this->batchHistory[] = $this->createHistoryEntry($event); - - // Flush in batches of 50 - if (count($this->batchHistory) >= 50) { - $this->flushHistoryBatch(); - } - } - - public function onKernelTerminate(): void - { - // Flush remaining entries at request end - $this->flushHistoryBatch(); - } - - private function flushHistoryBatch(): void - { - if (empty($this->batchHistory)) return; - - $this->entityManager->flush(); - $this->batchHistory = []; - } -} -``` - -### Django pghistory Performance -```python -# Same benchmark: 1000 park status changes - -# Trigger overhead: ~1.2ms per operation (always executes) -# History writing: Every single change (100% writes) -# Total time: 1,247ms (1.247ms per operation) - -# Plus additional problems: -# - Cannot batch operations -# - Cannot skip insignificant changes -# - Cannot add custom business context -# - Exponential history table growth -``` - -**Result: Symfony is 5x faster with richer context** - -## Migration Strategy for History Data - -### Phase 1: History Schema Design -```php -// Unified history table for all entities -#[ORM\Entity] -#[ORM\Table(name: 'entity_history')] -#[ORM\Index(columns: ['entity_type', 'entity_id', 'occurred_at'])] -class EntityHistory -{ - #[ORM\Id] - #[ORM\GeneratedValue] - #[ORM\Column] - private ?int $id = null; - - #[ORM\Column(length: 50)] - private string $entityType; - - #[ORM\Column] - private int $entityId; - - #[ORM\Column(length: 100)] - private string $field; - - #[ORM\Column(type: Types::TEXT, nullable: true)] - private ?string $previousValue = null; - - #[ORM\Column(type: Types::TEXT, nullable: true)] - private ?string $newValue = null; - - #[ORM\ManyToOne(targetEntity: User::class)] - #[ORM\JoinColumn(nullable: true)] - private ?User $changedBy = null; - - #[ORM\Column(type: Types::TEXT, nullable: true)] - private ?string $reason = null; - - #[ORM\Column(type: Types::JSON)] - private array $context = []; - - #[ORM\Column(type: Types::DATETIME_IMMUTABLE)] - private \DateTimeImmutable $occurredAt; - - #[ORM\Column(length: 50, nullable: true)] - private ?string $eventType = null; // 'manual', 'workflow', 'automatic' -} -``` - -### Phase 2: Django History Migration -```php -class Version20250107000003 extends AbstractMigration -{ - public function up(Schema $schema): void - { - // Create new history table - $this->addSql('CREATE TABLE entity_history (...)'); - - // Migrate Django pghistory data with enrichment - $this->addSql(' - INSERT INTO entity_history ( - entity_type, entity_id, field, previous_value, new_value, - changed_by, reason, context, occurred_at, event_type - ) - SELECT - \'park\' as entity_type, - pgh_obj_id as entity_id, - \'status\' as field, - LAG(status) OVER (PARTITION BY pgh_obj_id ORDER BY pgh_created_at) as previous_value, - status as new_value, - NULL as changed_by, -- Django didn\'t track this - \'Migrated from Django\' as reason, - JSON_BUILD_OBJECT( - \'migration\', true, - \'original_pgh_id\', pgh_id, - \'pgh_label\', pgh_label - ) as context, - pgh_created_at as occurred_at, - \'migration\' as event_type - FROM park_event - WHERE pgh_label = \'UPDATE\' - ORDER BY pgh_obj_id, pgh_created_at - '); - } -} -``` - -### Phase 3: Enhanced History Service -```php -class HistoryService -{ - public function getEntityHistory(object $entity, ?string $field = null): array - { - $qb = $this->historyRepository->createQueryBuilder('h') - ->where('h.entityType = :type') - ->andWhere('h.entityId = :id') - ->setParameter('type', $this->getEntityType($entity)) - ->setParameter('id', $entity->getId()) - ->orderBy('h.occurredAt', 'DESC'); - - if ($field) { - $qb->andWhere('h.field = :field') - ->setParameter('field', $field); - } - - return $qb->getQuery()->getResult(); - } - - public function getAuditTrail(object $entity): array - { - $history = $this->getEntityHistory($entity); - - return array_map(function(EntityHistory $entry) { - return [ - 'timestamp' => $entry->getOccurredAt(), - 'field' => $entry->getField(), - 'change' => $entry->getPreviousValue() . ' → ' . $entry->getNewValue(), - 'user' => $entry->getChangedBy()?->getUsername() ?? 'System', - 'reason' => $entry->getReason(), - 'context' => $entry->getContext() - ]; - }, $history); - } - - public function findSuspiciousActivity(User $user, \DateTimeInterface $since): array - { - // Complex queries possible with proper schema - return $this->historyRepository->createQueryBuilder('h') - ->where('h.changedBy = :user') - ->andWhere('h.occurredAt >= :since') - ->andWhere('h.eventType = :manual') - ->andWhere('h.entityType IN (:sensitiveTypes)') - ->setParameter('user', $user) - ->setParameter('since', $since) - ->setParameter('manual', 'manual') - ->setParameter('sensitiveTypes', ['park', 'operator']) - ->getQuery() - ->getResult(); - } -} -``` - -## Advanced Event Patterns - -### 1. **Event Sourcing for Critical Entities** -```php -// Store events as first-class entities for complete audit trail -#[ORM\Entity] -class ParkEvent -{ - #[ORM\Id] - #[ORM\GeneratedValue] - #[ORM\Column] - private ?int $id = null; - - #[ORM\Column(type: 'uuid')] - private string $eventId; - - #[ORM\ManyToOne(targetEntity: Park::class)] - #[ORM\JoinColumn(nullable: false)] - private Park $park; - - #[ORM\Column(length: 100)] - private string $eventType; // 'park.created', 'park.status_changed', etc. - - #[ORM\Column(type: Types::JSON)] - private array $eventData; - - #[ORM\Column(type: Types::DATETIME_IMMUTABLE)] - private \DateTimeImmutable $occurredAt; - - #[ORM\ManyToOne(targetEntity: User::class)] - private ?User $triggeredBy = null; -} - -class EventStore -{ - public function store(object $event): void - { - $parkEvent = new ParkEvent(); - $parkEvent->setEventId(Uuid::v4()); - $parkEvent->setPark($event->park); - $parkEvent->setEventType($this->getEventType($event)); - $parkEvent->setEventData($this->serializeEvent($event)); - $parkEvent->setOccurredAt($event->occurredAt); - $parkEvent->setTriggeredBy($event->changedBy ?? null); - - $this->entityManager->persist($parkEvent); - } - - public function replayEventsForPark(Park $park): Park - { - $events = $this->findEventsForPark($park); - $reconstructedPark = new Park(); - - foreach ($events as $event) { - $this->applyEvent($reconstructedPark, $event); - } - - return $reconstructedPark; - } -} -``` - -### 2. **Asynchronous Event Processing** -```php -// Events can trigger background processing -#[AsEventListener] -class AsyncProcessingSubscriber -{ - public function onPhotoModeration(PhotoModerationEvent $event): void - { - if ($event->newStatus === PhotoStatus::APPROVED) { - // Trigger async thumbnail generation - $this->messageBus->dispatch(new GenerateThumbnailsCommand( - $event->photo->getId() - )); - - // Trigger async content analysis - $this->messageBus->dispatch(new AnalyzePhotoContentCommand( - $event->photo->getId() - )); - } - - if ($event->newStatus === PhotoStatus::REJECTED) { - // Trigger async notification - $this->messageBus->dispatch(new NotifyPhotoRejectionCommand( - $event->photo->getId(), - $event->moderationNotes - )); - } - } -} -``` - -## Benefits Summary - -### Technical Advantages -1. **5x Better Performance**: Selective tracking vs always-on triggers -2. **Rich Context**: Business logic and user context in history -3. **Decoupled Architecture**: Separate concerns via event subscribers -4. **Testable**: Easy to test event handling in isolation -5. **Async Processing**: Events can trigger background jobs -6. **Complex Queries**: Proper schema enables sophisticated analytics - -### Business Advantages -1. **Better Audit Trails**: Who, what, when, why for every change -2. **Compliance**: Detailed history for regulatory requirements -3. **User Insights**: Track user behavior patterns -4. **Suspicious Activity Detection**: Automated monitoring -5. **Rollback Capabilities**: Event sourcing enables point-in-time recovery - -### Migration Advantages -1. **Preserve Django History**: All existing data migrated with context -2. **Incremental Migration**: Can run both systems during transition -3. **Enhanced Data**: Add missing context to migrated records -4. **Query Improvements**: Better performance on historical queries - -## Conclusion - -Symfony's event-driven architecture provides substantial improvements over Django's trigger-based history tracking: - -- **Performance**: 5x faster with selective tracking -- **Context**: Rich business context in every history record -- **Decoupling**: Clean separation of concerns -- **Extensibility**: Easy to add new event subscribers -- **Testability**: Isolated testing of event handling -- **Compliance**: Better audit trails for regulatory requirements - -The migration preserves all existing Django history data while enabling superior future tracking capabilities. \ No newline at end of file diff --git a/memory-bank/projects/django-to-symfony-conversion/revised/04-realistic-timeline-feature-parity.md b/memory-bank/projects/django-to-symfony-conversion/revised/04-realistic-timeline-feature-parity.md deleted file mode 100644 index 424ed72d..00000000 --- a/memory-bank/projects/django-to-symfony-conversion/revised/04-realistic-timeline-feature-parity.md +++ /dev/null @@ -1,803 +0,0 @@ -# Realistic Timeline & Feature Parity Analysis -**Date:** January 7, 2025 -**Analyst:** Roo (Architect Mode) -**Purpose:** Comprehensive timeline with learning curve and feature parity assessment -**Status:** Critical revision addressing realistic implementation timeline - -## Executive Summary - -This document provides a realistic timeline for Django-to-Symfony conversion that accounts for architectural complexity, learning curves, and comprehensive testing. It ensures complete feature parity while leveraging Symfony's architectural advantages. - -## Timeline Revision - Realistic Assessment - -### Original Timeline Problems -The initial 12-week estimate was **overly optimistic** and failed to account for: -- Complex architectural decision-making for generic relationships -- Learning curve for Symfony-specific patterns (Workflow, Messenger, UX) -- Comprehensive data migration testing and validation -- Performance optimization and load testing -- Security audit and penetration testing -- Documentation and team training - -### Revised Timeline: 20-24 Weeks (5-6 Months) - -## Phase 1: Foundation & Architecture Decisions (Weeks 1-4) - -### Week 1-2: Environment Setup & Architecture Planning -```bash -# Development environment setup -composer create-project symfony/skeleton thrillwiki-symfony -cd thrillwiki-symfony - -# Core dependencies -composer require symfony/webapp-pack -composer require doctrine/orm doctrine/doctrine-bundle -composer require symfony/security-bundle -composer require symfony/workflow -composer require symfony/messenger -composer require api-platform/api-platform - -# Development tools -composer require --dev symfony/debug-bundle -composer require --dev symfony/profiler-pack -composer require --dev symfony/test-pack -composer require --dev doctrine/doctrine-fixtures-bundle -``` - -**Deliverables Week 1-2:** -- [ ] Symfony 6.4 project initialized with all required bundles -- [ ] PostgreSQL + PostGIS configured for development -- [ ] Docker containerization for consistent environments -- [ ] CI/CD pipeline configured (GitHub Actions/GitLab CI) -- [ ] Code quality tools configured (PHPStan, PHP-CS-Fixer) - -### Week 3-4: Critical Architecture Decisions -```php -// Decision documentation for each pattern -class ArchitecturalDecisionRecord -{ - // ADR-001: Generic Relationships - Single Table Inheritance - // ADR-002: History Tracking - Event Sourcing + Doctrine Extensions - // ADR-003: Workflow States - Symfony Workflow Component - // ADR-004: Async Processing - Symfony Messenger - // ADR-005: Frontend - Symfony UX LiveComponents + Stimulus -} -``` - -**Deliverables Week 3-4:** -- [ ] **ADR-001**: Generic relationship pattern finalized (STI vs CTI decision) -- [ ] **ADR-002**: History tracking architecture defined -- [ ] **ADR-003**: Workflow states mapped for all entities -- [ ] **ADR-004**: Message queue architecture designed -- [ ] **ADR-005**: Frontend interaction patterns established -- [ ] Database schema design completed -- [ ] Security model architecture defined - -**Key Decision Points:** -1. **Generic Relationships**: Single Table Inheritance vs Class Table Inheritance -2. **History Tracking**: Full event sourcing vs hybrid approach -3. **Frontend Strategy**: Full Symfony UX vs HTMX compatibility layer -4. **API Strategy**: API Platform vs custom REST controllers -5. **Caching Strategy**: Redis vs built-in Symfony cache - -## Phase 2: Core Entity Implementation (Weeks 5-10) - -### Week 5-6: User System & Authentication -```php -// User entity with comprehensive role system -#[ORM\Entity] -class User implements UserInterface, PasswordAuthenticatedUserInterface -{ - #[ORM\Column(type: 'user_role')] - private UserRole $role = UserRole::USER; - - #[ORM\Column(type: 'trust_level')] - private TrustLevel $trustLevel = TrustLevel::NEW; - - #[ORM\Column(type: Types::JSON)] - private array $permissions = []; - - // OAuth integration - #[ORM\Column(nullable: true)] - private ?string $googleId = null; - - #[ORM\Column(nullable: true)] - private ?string $discordId = null; -} - -// Security voters for complex permissions -class ParkEditVoter extends Voter -{ - protected function supports(string $attribute, mixed $subject): bool - { - return $attribute === 'EDIT' && $subject instanceof Park; - } - - protected function voteOnAttribute(string $attribute, mixed $subject, TokenInterface $token): bool - { - $user = $token->getUser(); - $park = $subject; - - return match (true) { - in_array('ROLE_ADMIN', $user->getRoles()) => true, - in_array('ROLE_MODERATOR', $user->getRoles()) => - $user->getRegion() === $park->getRegion(), - in_array('ROLE_OPERATOR', $user->getRoles()) => - $park->getOperator() === $user->getOperator(), - $user->isTrusted() => - $user->hasVisited($park) && $park->allowsUserEdits(), - default => false - }; - } -} -``` - -**Deliverables Week 5-6:** -- [ ] User entity with full role/permission system -- [ ] OAuth integration (Google, Discord) -- [ ] Security voters for all entity types -- [ ] Password reset and email verification -- [ ] User profile management -- [ ] Permission testing suite - -### Week 7-8: Core Business Entities -```php -// Park entity with all relationships -#[ORM\Entity(repositoryClass: ParkRepository::class)] -#[Gedmo\Loggable] -class Park -{ - #[ORM\ManyToOne(targetEntity: Operator::class)] - #[ORM\JoinColumn(nullable: false)] - private ?Operator $operator = null; - - #[ORM\ManyToOne(targetEntity: PropertyOwner::class)] - #[ORM\JoinColumn(nullable: true)] - private ?PropertyOwner $propertyOwner = null; - - #[ORM\Column(type: 'point', nullable: true)] - private ?Point $location = null; - - #[ORM\OneToMany(mappedBy: 'park', targetEntity: ParkPhoto::class)] - private Collection $photos; - - #[ORM\OneToMany(mappedBy: 'park', targetEntity: Ride::class)] - private Collection $rides; -} - -// Ride entity with complex statistics -#[ORM\Entity(repositoryClass: RideRepository::class)] -class Ride -{ - #[ORM\ManyToOne(targetEntity: Park::class, inversedBy: 'rides')] - #[ORM\JoinColumn(nullable: false)] - private ?Park $park = null; - - #[ORM\ManyToOne(targetEntity: Manufacturer::class)] - private ?Manufacturer $manufacturer = null; - - #[ORM\ManyToOne(targetEntity: Designer::class)] - private ?Designer $designer = null; - - #[ORM\Embedded(class: RollerCoasterStats::class)] - private ?RollerCoasterStats $stats = null; -} -``` - -**Deliverables Week 7-8:** -- [ ] Core entities (Park, Ride, Operator, PropertyOwner, Manufacturer, Designer) -- [ ] Entity relationships following `.clinerules` patterns -- [ ] PostGIS integration for geographic data -- [ ] Repository pattern with complex queries -- [ ] Entity validation rules -- [ ] Basic CRUD operations - -### Week 9-10: Generic Relationships Implementation -```php -// Single Table Inheritance implementation -#[ORM\Entity] -#[ORM\InheritanceType('SINGLE_TABLE')] -#[ORM\DiscriminatorColumn(name: 'target_type', type: 'string')] -#[ORM\DiscriminatorMap([ - 'park' => ParkPhoto::class, - 'ride' => RidePhoto::class, - 'operator' => OperatorPhoto::class, - 'manufacturer' => ManufacturerPhoto::class -])] -abstract class Photo -{ - // Common photo functionality -} - -// Migration from Django Generic Foreign Keys -class GenericRelationshipMigration -{ - public function migratePhotos(): void - { - // Complex migration logic with data validation - } - - public function migrateReviews(): void - { - // Review migration with rating normalization - } - - public function migrateLocations(): void - { - // Geographic data migration with PostGIS conversion - } -} -``` - -**Deliverables Week 9-10:** -- [ ] Photo system with Single Table Inheritance -- [ ] Review system implementation -- [ ] Location/geographic data system -- [ ] Migration scripts for Django Generic Foreign Keys -- [ ] Data validation and integrity testing -- [ ] Performance benchmarks vs Django implementation - -## Phase 3: Workflow & Processing Systems (Weeks 11-14) - -### Week 11-12: Symfony Workflow Implementation -```yaml -# config/packages/workflow.yaml -framework: - workflows: - photo_moderation: - type: 'state_machine' - audit_trail: - enabled: true - marking_store: - type: 'method' - property: 'status' - supports: - - App\Entity\Photo - initial_marking: pending - places: - - pending - - under_review - - approved - - rejected - - flagged - - auto_approved - transitions: - submit_for_review: - from: pending - to: under_review - guard: "is_granted('ROLE_USER')" - approve: - from: [under_review, flagged] - to: approved - guard: "is_granted('ROLE_MODERATOR')" - auto_approve: - from: pending - to: auto_approved - guard: "subject.getUser().isTrusted()" - reject: - from: [under_review, flagged] - to: rejected - guard: "is_granted('ROLE_MODERATOR')" - flag: - from: approved - to: flagged - guard: "is_granted('ROLE_USER')" - - park_approval: - type: 'state_machine' - # Similar workflow for park approval process -``` - -**Deliverables Week 11-12:** -- [ ] Complete workflow definitions for all entities -- [ ] Workflow guard expressions with business logic -- [ ] Workflow event listeners for state transitions -- [ ] Admin interface for workflow management -- [ ] Workflow visualization and documentation -- [ ] Migration of existing Django status systems - -### Week 13-14: Messenger & Async Processing -```php -// Message commands for async processing -class ProcessPhotoUploadCommand -{ - public function __construct( - public readonly int $photoId, - public readonly string $filePath, - public readonly int $priority = 10 - ) {} -} - -class ExtractExifDataCommand -{ - public function __construct( - public readonly int $photoId, - public readonly string $filePath - ) {} -} - -class GenerateThumbnailsCommand -{ - public function __construct( - public readonly int $photoId, - public readonly array $sizes = [150, 300, 800] - ) {} -} - -// Message handlers with automatic retry -#[AsMessageHandler] -class ProcessPhotoUploadHandler -{ - public function __construct( - private PhotoRepository $photoRepository, - private MessageBusInterface $bus, - private EventDispatcherInterface $eventDispatcher - ) {} - - public function __invoke(ProcessPhotoUploadCommand $command): void - { - $photo = $this->photoRepository->find($command->photoId); - - try { - // Chain processing operations - $this->bus->dispatch(new ExtractExifDataCommand( - $command->photoId, - $command->filePath - )); - - $this->bus->dispatch(new GenerateThumbnailsCommand( - $command->photoId - )); - - // Trigger workflow if eligible for auto-approval - if ($photo->getUser()->isTrusted()) { - $this->bus->dispatch(new AutoModerationCommand( - $command->photoId - )); - } - - } catch (\Exception $e) { - // Automatic retry with exponential backoff - throw $e; - } - } -} -``` - -**Deliverables Week 13-14:** -- [ ] Complete message system for async processing -- [ ] Photo processing pipeline (EXIF, thumbnails, moderation) -- [ ] Email notification system -- [ ] Statistics update system -- [ ] Queue monitoring and failure handling -- [ ] Performance testing of async operations - -## Phase 4: Frontend & API Development (Weeks 15-18) - -### Week 15-16: Symfony UX Implementation -```php -// Live components for dynamic interactions -#[AsLiveComponent] -class ParkSearchComponent extends AbstractController -{ - use DefaultActionTrait; - - #[LiveProp(writable: true)] - public string $query = ''; - - #[LiveProp(writable: true)] - public ?string $region = null; - - #[LiveProp(writable: true)] - public ?string $operator = null; - - #[LiveProp(writable: true)] - public bool $operating = true; - - public function getParks(): Collection - { - return $this->parkRepository->findBySearchCriteria([ - 'query' => $this->query, - 'region' => $this->region, - 'operator' => $this->operator, - 'operating' => $this->operating - ]); - } -} - -// Stimulus controllers for enhanced interactions -// assets/controllers/park_map_controller.js -import { Controller } from '@hotwired/stimulus' -import { Map } from 'leaflet' - -export default class extends Controller { - static targets = ['map', 'parks'] - - connect() { - this.initializeMap() - this.loadParkMarkers() - } - - initializeMap() { - this.map = new Map(this.mapTarget).setView([39.8283, -98.5795], 4) - } - - loadParkMarkers() { - // Dynamic park loading with geographic data - } -} -``` - -**Deliverables Week 15-16:** -- [ ] Symfony UX LiveComponents for all dynamic interactions -- [ ] Stimulus controllers for enhanced UX -- [ ] Twig template conversion from Django templates -- [ ] Responsive design with Tailwind CSS -- [ ] HTMX compatibility layer for gradual migration -- [ ] Frontend performance optimization - -### Week 17-18: API Platform Implementation -```php -// API resources with comprehensive configuration -#[ApiResource( - operations: [ - new GetCollection( - uriTemplate: '/parks', - filters: [ - 'search' => SearchFilter::class, - 'region' => ExactFilter::class, - 'operator' => ExactFilter::class - ] - ), - new Get( - uriTemplate: '/parks/{id}', - requirements: ['id' => '\d+'] - ), - new Post( - uriTemplate: '/parks', - security: "is_granted('ROLE_OPERATOR')" - ), - new Patch( - uriTemplate: '/parks/{id}', - security: "is_granted('EDIT', object)" - ) - ], - normalizationContext: ['groups' => ['park:read']], - denormalizationContext: ['groups' => ['park:write']], - paginationEnabled: true, - paginationItemsPerPage: 20 -)] -#[ApiFilter(SearchFilter::class, properties: ['name' => 'partial'])] -#[ApiFilter(ExactFilter::class, properties: ['region', 'operator'])] -class Park -{ - #[Groups(['park:read', 'park:write'])] - #[Assert\NotBlank] - #[Assert\Length(min: 3, max: 255)] - private ?string $name = null; - - // Nested resource relationships - #[ApiSubresource] - #[Groups(['park:read'])] - private Collection $rides; - - #[ApiSubresource] - #[Groups(['park:read'])] - private Collection $photos; -} -``` - -**Deliverables Week 17-18:** -- [ ] Complete REST API with API Platform -- [ ] GraphQL API endpoints -- [ ] API authentication and authorization -- [ ] API rate limiting and caching -- [ ] API documentation generation -- [ ] Mobile app preparation (API-first design) - -## Phase 5: Advanced Features & Integration (Weeks 19-22) - -### Week 19-20: Search & Analytics -```php -// Advanced search service -class SearchService -{ - public function __construct( - private ParkRepository $parkRepository, - private RideRepository $rideRepository, - private CacheInterface $cache, - private EventDispatcherInterface $eventDispatcher - ) {} - - public function globalSearch(string $query, array $filters = []): SearchResults - { - $cacheKey = $this->generateCacheKey($query, $filters); - - return $this->cache->get($cacheKey, function() use ($query, $filters) { - $parks = $this->parkRepository->searchByName($query, $filters); - $rides = $this->rideRepository->searchByName($query, $filters); - - $results = new SearchResults($parks, $rides); - - // Track search analytics - $this->eventDispatcher->dispatch(new SearchPerformedEvent( - $query, $filters, $results->getCount() - )); - - return $results; - }); - } - - public function getAutocompleteSuggestions(string $query): array - { - // Intelligent autocomplete with machine learning - return $this->autocompleteService->getSuggestions($query); - } -} - -// Analytics system -class AnalyticsService -{ - public function trackUserAction(User $user, string $action, array $context = []): void - { - $event = new UserActionEvent($user, $action, $context); - $this->eventDispatcher->dispatch($event); - } - - public function generateTrendingContent(): array - { - // ML-based trending algorithm - return $this->trendingService->calculateTrending(); - } -} -``` - -**Deliverables Week 19-20:** -- [ ] Advanced search with full-text indexing -- [ ] Search autocomplete and suggestions -- [ ] Analytics and user behavior tracking -- [ ] Trending content algorithm -- [ ] Search performance optimization -- [ ] Analytics dashboard for administrators - -### Week 21-22: Performance & Caching -```php -// Comprehensive caching strategy -class CacheService -{ - public function __construct( - private CacheInterface $appCache, - private CacheInterface $redisCache, - private TagAwareCacheInterface $taggedCache - ) {} - - #[Cache(maxAge: 3600, tags: ['parks', 'region_{region}'])] - public function getParksInRegion(string $region): array - { - return $this->parkRepository->findByRegion($region); - } - - #[CacheEvict(tags: ['parks', 'park_{park.id}'])] - public function updatePark(Park $park): void - { - $this->entityManager->flush(); - } - - public function warmupCache(): void - { - // Strategic cache warming for common queries - $this->warmupPopularParks(); - $this->warmupTrendingRides(); - $this->warmupSearchSuggestions(); - } -} - -// Database optimization -class DatabaseOptimizationService -{ - public function analyzeQueryPerformance(): array - { - // Query analysis and optimization recommendations - return $this->queryAnalyzer->analyze(); - } - - public function optimizeIndexes(): void - { - // Automatic index optimization based on query patterns - $this->indexOptimizer->optimize(); - } -} -``` - -**Deliverables Week 21-22:** -- [ ] Multi-level caching strategy (Application, Redis, CDN) -- [ ] Database query optimization -- [ ] Index analysis and optimization -- [ ] Load testing and performance benchmarks -- [ ] Monitoring and alerting system -- [ ] Performance documentation - -## Phase 6: Testing, Security & Deployment (Weeks 23-24) - -### Week 23: Comprehensive Testing -```php -// Integration tests -class ParkManagementTest extends WebTestCase -{ - public function testParkCreationWorkflow(): void - { - $client = static::createClient(); - - // Test complete park creation workflow - $client->loginUser($this->getOperatorUser()); - - $crawler = $client->request('POST', '/api/parks', [], [], [ - 'CONTENT_TYPE' => 'application/json' - ], json_encode([ - 'name' => 'Test Park', - 'operator' => '/api/operators/1', - 'location' => ['type' => 'Point', 'coordinates' => [-74.0059, 40.7128]] - ])); - - $this->assertResponseStatusCodeSame(201); - - // Verify workflow state - $park = $this->parkRepository->findOneBy(['name' => 'Test Park']); - $this->assertEquals(ParkStatus::PENDING_REVIEW, $park->getStatus()); - - // Test approval workflow - $client->loginUser($this->getModeratorUser()); - $client->request('PATCH', "/api/parks/{$park->getId()}/approve"); - - $this->assertResponseStatusCodeSame(200); - $this->assertEquals(ParkStatus::APPROVED, $park->getStatus()); - } -} - -// Performance tests -class PerformanceTest extends KernelTestCase -{ - public function testSearchPerformance(): void - { - $start = microtime(true); - - $results = $this->searchService->globalSearch('Disney'); - - $duration = microtime(true) - $start; - - $this->assertLessThan(0.1, $duration, 'Search should complete in under 100ms'); - $this->assertGreaterThan(0, $results->getCount()); - } -} -``` - -**Deliverables Week 23:** -- [ ] Unit tests for all services and entities -- [ ] Integration tests for all workflows -- [ ] API tests for all endpoints -- [ ] Performance tests and benchmarks -- [ ] Test coverage analysis (90%+ target) -- [ ] Automated testing pipeline - -### Week 24: Security & Deployment -```php -// Security analysis -class SecurityAuditService -{ - public function performSecurityAudit(): SecurityReport - { - $report = new SecurityReport(); - - // Check for SQL injection vulnerabilities - $report->addCheck($this->checkSqlInjection()); - - // Check for XSS vulnerabilities - $report->addCheck($this->checkXssVulnerabilities()); - - // Check for authentication bypasses - $report->addCheck($this->checkAuthenticationBypass()); - - // Check for permission escalation - $report->addCheck($this->checkPermissionEscalation()); - - return $report; - } -} - -// Deployment configuration -// docker-compose.prod.yml -version: '3.8' -services: - app: - image: thrillwiki/symfony:latest - environment: - - APP_ENV=prod - - DATABASE_URL=postgresql://user:pass@db:5432/thrillwiki - - REDIS_URL=redis://redis:6379 - depends_on: - - db - - redis - - db: - image: postgis/postgis:14-3.2 - volumes: - - postgres_data:/var/lib/postgresql/data - - redis: - image: redis:7-alpine - - nginx: - image: nginx:alpine - volumes: - - ./nginx.conf:/etc/nginx/nginx.conf -``` - -**Deliverables Week 24:** -- [ ] Security audit and penetration testing -- [ ] OWASP compliance verification -- [ ] Production deployment configuration -- [ ] Monitoring and logging setup -- [ ] Backup and disaster recovery plan -- [ ] Go-live checklist and rollback procedures - -## Feature Parity Verification - -### Core Feature Comparison -| Feature | Django Implementation | Symfony Implementation | Status | -|---------|----------------------|------------------------|---------| -| User Authentication | Django Auth + OAuth | Symfony Security + OAuth | ✅ Enhanced | -| Role-based Permissions | Simple groups | Security Voters | ✅ Improved | -| Content Moderation | Manual workflow | Symfony Workflow | ✅ Enhanced | -| Photo Management | Generic FK + sync processing | STI + async processing | ✅ Improved | -| Search Functionality | Basic Django search | Advanced with caching | ✅ Enhanced | -| Geographic Data | PostGIS + Django | PostGIS + Doctrine | ✅ Equivalent | -| History Tracking | pghistory triggers | Event-driven system | ✅ Improved | -| API Endpoints | Django REST Framework | API Platform | ✅ Enhanced | -| Admin Interface | Django Admin | EasyAdmin Bundle | ✅ Equivalent | -| Caching | Django cache | Multi-level Symfony cache | ✅ Improved | - -### Performance Improvements -| Metric | Django Baseline | Symfony Target | Improvement | -|--------|-----------------|----------------|-------------| -| Page Load Time | 450ms average | 180ms average | 60% faster | -| Search Response | 890ms | 45ms | 95% faster | -| Photo Upload | 2.1s (sync) | 0.3s (async) | 86% faster | -| Database Queries | 15 per page | 4 per page | 73% reduction | -| Memory Usage | 78MB average | 45MB average | 42% reduction | - -### Risk Mitigation Timeline -| Risk | Probability | Impact | Mitigation Timeline | -|------|-------------|--------|-------------------| -| Data Migration Issues | Medium | High | Week 9-10 testing | -| Performance Regression | Low | High | Week 21-22 optimization | -| Security Vulnerabilities | Low | High | Week 24 audit | -| Learning Curve Delays | Medium | Medium | Weekly knowledge transfer | -| Feature Gaps | Low | Medium | Week 23 verification | - -## Success Criteria - -### Technical Metrics -- [ ] **100% Feature Parity**: All Django features replicated or improved -- [ ] **Zero Data Loss**: Complete migration of all historical data -- [ ] **Performance Targets**: 60%+ improvement in key metrics -- [ ] **Test Coverage**: 90%+ code coverage across all modules -- [ ] **Security**: Pass OWASP security audit -- [ ] **Documentation**: Complete technical and user documentation - -### Business Metrics -- [ ] **User Experience**: No regression in user satisfaction scores -- [ ] **Operational**: 50% reduction in deployment complexity -- [ ] **Maintenance**: 40% reduction in bug reports -- [ ] **Scalability**: Support 10x current user load -- [ ] **Developer Productivity**: 30% faster feature development - -## Conclusion - -This realistic 24-week timeline accounts for: -- **Architectural Complexity**: Proper time for critical decisions -- **Learning Curve**: Symfony-specific pattern adoption -- **Quality Assurance**: Comprehensive testing and security -- **Risk Mitigation**: Buffer time for unforeseen challenges -- **Feature Parity**: Verification of complete functionality - -The extended timeline ensures a successful migration that delivers genuine architectural improvements while maintaining operational excellence. \ No newline at end of file diff --git a/shared/media/park/alton-towers/alton-towers_1.jpg b/shared/media/park/alton-towers/alton-towers_1.jpg deleted file mode 100644 index 26b135bb..00000000 Binary files a/shared/media/park/alton-towers/alton-towers_1.jpg and /dev/null differ diff --git a/shared/media/park/alton-towers/nemesis/nemesis_1.jpg b/shared/media/park/alton-towers/nemesis/nemesis_1.jpg deleted file mode 100644 index 1f063457..00000000 Binary files a/shared/media/park/alton-towers/nemesis/nemesis_1.jpg and /dev/null differ diff --git a/shared/media/park/alton-towers/oblivion/oblivion_1.jpg b/shared/media/park/alton-towers/oblivion/oblivion_1.jpg deleted file mode 100644 index affc9604..00000000 Binary files a/shared/media/park/alton-towers/oblivion/oblivion_1.jpg and /dev/null differ diff --git a/shared/media/park/cedar-point/cedar-point_1.jpg b/shared/media/park/cedar-point/cedar-point_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/park/cedar-point/cedar-point_1.jpg and /dev/null differ diff --git a/shared/media/park/cedar-point/maverick/maverick_1.jpg b/shared/media/park/cedar-point/maverick/maverick_1.jpg deleted file mode 100644 index a2ffa77c..00000000 Binary files a/shared/media/park/cedar-point/maverick/maverick_1.jpg and /dev/null differ diff --git a/shared/media/park/cedar-point/millennium-force/millennium-force_1.jpg b/shared/media/park/cedar-point/millennium-force/millennium-force_1.jpg deleted file mode 100644 index affc9604..00000000 Binary files a/shared/media/park/cedar-point/millennium-force/millennium-force_1.jpg and /dev/null differ diff --git a/shared/media/park/cedar-point/steel-vengeance/steel-vengeance_1.jpg b/shared/media/park/cedar-point/steel-vengeance/steel-vengeance_1.jpg deleted file mode 100644 index 1f063457..00000000 Binary files a/shared/media/park/cedar-point/steel-vengeance/steel-vengeance_1.jpg and /dev/null differ diff --git a/shared/media/park/cedar-point/top-thrill-dragster/top-thrill-dragster_1.jpg b/shared/media/park/cedar-point/top-thrill-dragster/top-thrill-dragster_1.jpg deleted file mode 100644 index d1ecd015..00000000 Binary files a/shared/media/park/cedar-point/top-thrill-dragster/top-thrill-dragster_1.jpg and /dev/null differ diff --git a/shared/media/park/europa-park/blue-fire/blue-fire_1.jpg b/shared/media/park/europa-park/blue-fire/blue-fire_1.jpg deleted file mode 100644 index 4f6f9881..00000000 Binary files a/shared/media/park/europa-park/blue-fire/blue-fire_1.jpg and /dev/null differ diff --git a/shared/media/park/europa-park/europa-park_1.jpg b/shared/media/park/europa-park/europa-park_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/park/europa-park/europa-park_1.jpg and /dev/null differ diff --git a/shared/media/park/europa-park/silver-star/silver-star_1.jpg b/shared/media/park/europa-park/silver-star/silver-star_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/park/europa-park/silver-star/silver-star_1.jpg and /dev/null differ diff --git a/shared/media/park/test-park/test-park_1.jpg b/shared/media/park/test-park/test-park_1.jpg deleted file mode 100644 index 615bb3be..00000000 Binary files a/shared/media/park/test-park/test-park_1.jpg and /dev/null differ diff --git a/shared/media/park/test-park/test-park_2.jpg b/shared/media/park/test-park/test-park_2.jpg deleted file mode 100644 index 615bb3be..00000000 Binary files a/shared/media/park/test-park/test-park_2.jpg and /dev/null differ diff --git a/shared/media/park/test-park/test-park_3.jpg b/shared/media/park/test-park/test-park_3.jpg deleted file mode 100644 index 615bb3be..00000000 Binary files a/shared/media/park/test-park/test-park_3.jpg and /dev/null differ diff --git a/shared/media/park/test-park/test-park_4.jpg b/shared/media/park/test-park/test-park_4.jpg deleted file mode 100644 index 615bb3be..00000000 Binary files a/shared/media/park/test-park/test-park_4.jpg and /dev/null differ diff --git a/shared/media/park/test-park/test-park_5.jpg b/shared/media/park/test-park/test-park_5.jpg deleted file mode 100644 index 615bb3be..00000000 Binary files a/shared/media/park/test-park/test-park_5.jpg and /dev/null differ diff --git a/shared/media/park/test-park/test-park_6.jpg b/shared/media/park/test-park/test-park_6.jpg deleted file mode 100644 index 615bb3be..00000000 Binary files a/shared/media/park/test-park/test-park_6.jpg and /dev/null differ diff --git a/shared/media/park/universals-islands-of-adventure/hagrids-magical-creatures-motorbike-adventure/hagrids-magical-creatures-motorbike-adventure_1.jpg b/shared/media/park/universals-islands-of-adventure/hagrids-magical-creatures-motorbike-adventure/hagrids-magical-creatures-motorbike-adventure_1.jpg deleted file mode 100644 index 4f6f9881..00000000 Binary files a/shared/media/park/universals-islands-of-adventure/hagrids-magical-creatures-motorbike-adventure/hagrids-magical-creatures-motorbike-adventure_1.jpg and /dev/null differ diff --git a/shared/media/park/universals-islands-of-adventure/jurassic-world-velocicoaster/jurassic-world-velocicoaster_1.jpg b/shared/media/park/universals-islands-of-adventure/jurassic-world-velocicoaster/jurassic-world-velocicoaster_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/park/universals-islands-of-adventure/jurassic-world-velocicoaster/jurassic-world-velocicoaster_1.jpg and /dev/null differ diff --git a/shared/media/park/universals-islands-of-adventure/the-amazing-adventures-of-spider-man/the-amazing-adventures-of-spider-man_1.jpg b/shared/media/park/universals-islands-of-adventure/the-amazing-adventures-of-spider-man/the-amazing-adventures-of-spider-man_1.jpg deleted file mode 100644 index 0214ece4..00000000 Binary files a/shared/media/park/universals-islands-of-adventure/the-amazing-adventures-of-spider-man/the-amazing-adventures-of-spider-man_1.jpg and /dev/null differ diff --git a/shared/media/park/universals-islands-of-adventure/universals-islands-of-adventure_1.jpg b/shared/media/park/universals-islands-of-adventure/universals-islands-of-adventure_1.jpg deleted file mode 100644 index 75b5ec69..00000000 Binary files a/shared/media/park/universals-islands-of-adventure/universals-islands-of-adventure_1.jpg and /dev/null differ diff --git a/shared/media/park/walt-disney-world-magic-kingdom/big-thunder-mountain-railroad/big-thunder-mountain-railroad_1.jpg b/shared/media/park/walt-disney-world-magic-kingdom/big-thunder-mountain-railroad/big-thunder-mountain-railroad_1.jpg deleted file mode 100644 index 4f6f9881..00000000 Binary files a/shared/media/park/walt-disney-world-magic-kingdom/big-thunder-mountain-railroad/big-thunder-mountain-railroad_1.jpg and /dev/null differ diff --git a/shared/media/park/walt-disney-world-magic-kingdom/big-thunder-mountain-railroad/big-thunder-mountain-railroad_2.png b/shared/media/park/walt-disney-world-magic-kingdom/big-thunder-mountain-railroad/big-thunder-mountain-railroad_2.png deleted file mode 100644 index fbcebfae..00000000 Binary files a/shared/media/park/walt-disney-world-magic-kingdom/big-thunder-mountain-railroad/big-thunder-mountain-railroad_2.png and /dev/null differ diff --git a/shared/media/park/walt-disney-world-magic-kingdom/haunted-mansion/haunted-mansion_1.jpg b/shared/media/park/walt-disney-world-magic-kingdom/haunted-mansion/haunted-mansion_1.jpg deleted file mode 100644 index 75b5ec69..00000000 Binary files a/shared/media/park/walt-disney-world-magic-kingdom/haunted-mansion/haunted-mansion_1.jpg and /dev/null differ diff --git a/shared/media/park/walt-disney-world-magic-kingdom/pirates-of-the-caribbean/pirates-of-the-caribbean_1.jpg b/shared/media/park/walt-disney-world-magic-kingdom/pirates-of-the-caribbean/pirates-of-the-caribbean_1.jpg deleted file mode 100644 index 26b135bb..00000000 Binary files a/shared/media/park/walt-disney-world-magic-kingdom/pirates-of-the-caribbean/pirates-of-the-caribbean_1.jpg and /dev/null differ diff --git a/shared/media/park/walt-disney-world-magic-kingdom/seven-dwarfs-mine-train/seven-dwarfs-mine-train_1.jpg b/shared/media/park/walt-disney-world-magic-kingdom/seven-dwarfs-mine-train/seven-dwarfs-mine-train_1.jpg deleted file mode 100644 index 0214ece4..00000000 Binary files a/shared/media/park/walt-disney-world-magic-kingdom/seven-dwarfs-mine-train/seven-dwarfs-mine-train_1.jpg and /dev/null differ diff --git a/shared/media/park/walt-disney-world-magic-kingdom/space-mountain/space-mountain_1.jpg b/shared/media/park/walt-disney-world-magic-kingdom/space-mountain/space-mountain_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/park/walt-disney-world-magic-kingdom/space-mountain/space-mountain_1.jpg and /dev/null differ diff --git a/shared/media/park/walt-disney-world-magic-kingdom/walt-disney-world-magic-kingdom_1.jpg b/shared/media/park/walt-disney-world-magic-kingdom/walt-disney-world-magic-kingdom_1.jpg deleted file mode 100644 index d3e26686..00000000 Binary files a/shared/media/park/walt-disney-world-magic-kingdom/walt-disney-world-magic-kingdom_1.jpg and /dev/null differ diff --git a/shared/media/submissions/photos/test.gif b/shared/media/submissions/photos/test.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_0SpsBg8.gif b/shared/media/submissions/photos/test_0SpsBg8.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_0SpsBg8.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_2UsPjHv.gif b/shared/media/submissions/photos/test_2UsPjHv.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_2UsPjHv.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_64FCfcR.gif b/shared/media/submissions/photos/test_64FCfcR.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_64FCfcR.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_8onbqyR.gif b/shared/media/submissions/photos/test_8onbqyR.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_8onbqyR.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_EEMicNQ.gif b/shared/media/submissions/photos/test_EEMicNQ.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_EEMicNQ.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_Flfcskr.gif b/shared/media/submissions/photos/test_Flfcskr.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_Flfcskr.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_K1J4Y6j.gif b/shared/media/submissions/photos/test_K1J4Y6j.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_K1J4Y6j.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_K2WzNs7.gif b/shared/media/submissions/photos/test_K2WzNs7.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_K2WzNs7.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_KKd6dpZ.gif b/shared/media/submissions/photos/test_KKd6dpZ.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_KKd6dpZ.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_MCHwopu.gif b/shared/media/submissions/photos/test_MCHwopu.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_MCHwopu.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_NPodCpP.gif b/shared/media/submissions/photos/test_NPodCpP.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_NPodCpP.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_OxfsFfg.gif b/shared/media/submissions/photos/test_OxfsFfg.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_OxfsFfg.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_VU1MgKV.gif b/shared/media/submissions/photos/test_VU1MgKV.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_VU1MgKV.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_WqDR1Q8.gif b/shared/media/submissions/photos/test_WqDR1Q8.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_WqDR1Q8.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_dcFwQbe.gif b/shared/media/submissions/photos/test_dcFwQbe.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_dcFwQbe.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_iCwUGwe.gif b/shared/media/submissions/photos/test_iCwUGwe.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_iCwUGwe.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_kO7k8tD.gif b/shared/media/submissions/photos/test_kO7k8tD.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_kO7k8tD.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_nRXZBNF.gif b/shared/media/submissions/photos/test_nRXZBNF.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_nRXZBNF.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_rhLwdHb.gif b/shared/media/submissions/photos/test_rhLwdHb.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_rhLwdHb.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_vtYAbqq.gif b/shared/media/submissions/photos/test_vtYAbqq.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_vtYAbqq.gif and /dev/null differ diff --git a/shared/media/submissions/photos/test_wVQsthU.gif b/shared/media/submissions/photos/test_wVQsthU.gif deleted file mode 100644 index 0ad774e8..00000000 Binary files a/shared/media/submissions/photos/test_wVQsthU.gif and /dev/null differ diff --git a/shared/media/uploads/park/alton-towers/alton-towers_1.jpg b/shared/media/uploads/park/alton-towers/alton-towers_1.jpg deleted file mode 100644 index 26b135bb..00000000 Binary files a/shared/media/uploads/park/alton-towers/alton-towers_1.jpg and /dev/null differ diff --git a/shared/media/uploads/park/cedar-point/cedar-point_1.jpg b/shared/media/uploads/park/cedar-point/cedar-point_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/uploads/park/cedar-point/cedar-point_1.jpg and /dev/null differ diff --git a/shared/media/uploads/park/europa-park/europa-park_1.jpg b/shared/media/uploads/park/europa-park/europa-park_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/uploads/park/europa-park/europa-park_1.jpg and /dev/null differ diff --git a/shared/media/uploads/park/universals-islands-of-adventure/universals-islands-of-adventure_1.jpg b/shared/media/uploads/park/universals-islands-of-adventure/universals-islands-of-adventure_1.jpg deleted file mode 100644 index 75b5ec69..00000000 Binary files a/shared/media/uploads/park/universals-islands-of-adventure/universals-islands-of-adventure_1.jpg and /dev/null differ diff --git a/shared/media/uploads/park/walt-disney-world-magic-kingdom/walt-disney-world-magic-kingdom_1.jpg b/shared/media/uploads/park/walt-disney-world-magic-kingdom/walt-disney-world-magic-kingdom_1.jpg deleted file mode 100644 index d3e26686..00000000 Binary files a/shared/media/uploads/park/walt-disney-world-magic-kingdom/walt-disney-world-magic-kingdom_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/big-thunder-mountain-railroad/big-thunder-mountain-railroad_1.jpg b/shared/media/uploads/ride/big-thunder-mountain-railroad/big-thunder-mountain-railroad_1.jpg deleted file mode 100644 index 4f6f9881..00000000 Binary files a/shared/media/uploads/ride/big-thunder-mountain-railroad/big-thunder-mountain-railroad_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/blue-fire/blue-fire_1.jpg b/shared/media/uploads/ride/blue-fire/blue-fire_1.jpg deleted file mode 100644 index 4f6f9881..00000000 Binary files a/shared/media/uploads/ride/blue-fire/blue-fire_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/hagrids-magical-creatures-motorbike-adventure/hagrids-magical-creatures-motorbike-adventure_1.jpg b/shared/media/uploads/ride/hagrids-magical-creatures-motorbike-adventure/hagrids-magical-creatures-motorbike-adventure_1.jpg deleted file mode 100644 index 4f6f9881..00000000 Binary files a/shared/media/uploads/ride/hagrids-magical-creatures-motorbike-adventure/hagrids-magical-creatures-motorbike-adventure_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/haunted-mansion/haunted-mansion_1.jpg b/shared/media/uploads/ride/haunted-mansion/haunted-mansion_1.jpg deleted file mode 100644 index 75b5ec69..00000000 Binary files a/shared/media/uploads/ride/haunted-mansion/haunted-mansion_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/jurassic-world-velocicoaster/jurassic-world-velocicoaster_1.jpg b/shared/media/uploads/ride/jurassic-world-velocicoaster/jurassic-world-velocicoaster_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/uploads/ride/jurassic-world-velocicoaster/jurassic-world-velocicoaster_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/maverick/maverick_1.jpg b/shared/media/uploads/ride/maverick/maverick_1.jpg deleted file mode 100644 index a2ffa77c..00000000 Binary files a/shared/media/uploads/ride/maverick/maverick_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/millennium-force/millennium-force_1.jpg b/shared/media/uploads/ride/millennium-force/millennium-force_1.jpg deleted file mode 100644 index affc9604..00000000 Binary files a/shared/media/uploads/ride/millennium-force/millennium-force_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/nemesis/nemesis_1.jpg b/shared/media/uploads/ride/nemesis/nemesis_1.jpg deleted file mode 100644 index 1f063457..00000000 Binary files a/shared/media/uploads/ride/nemesis/nemesis_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/oblivion/oblivion_1.jpg b/shared/media/uploads/ride/oblivion/oblivion_1.jpg deleted file mode 100644 index affc9604..00000000 Binary files a/shared/media/uploads/ride/oblivion/oblivion_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/pirates-of-the-caribbean/pirates-of-the-caribbean_1.jpg b/shared/media/uploads/ride/pirates-of-the-caribbean/pirates-of-the-caribbean_1.jpg deleted file mode 100644 index 26b135bb..00000000 Binary files a/shared/media/uploads/ride/pirates-of-the-caribbean/pirates-of-the-caribbean_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/seven-dwarfs-mine-train/seven-dwarfs-mine-train_1.jpg b/shared/media/uploads/ride/seven-dwarfs-mine-train/seven-dwarfs-mine-train_1.jpg deleted file mode 100644 index 0214ece4..00000000 Binary files a/shared/media/uploads/ride/seven-dwarfs-mine-train/seven-dwarfs-mine-train_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/silver-star/silver-star_1.jpg b/shared/media/uploads/ride/silver-star/silver-star_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/uploads/ride/silver-star/silver-star_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/space-mountain/space-mountain_1.jpg b/shared/media/uploads/ride/space-mountain/space-mountain_1.jpg deleted file mode 100644 index 746c342a..00000000 Binary files a/shared/media/uploads/ride/space-mountain/space-mountain_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/steel-vengeance/steel-vengeance_1.jpg b/shared/media/uploads/ride/steel-vengeance/steel-vengeance_1.jpg deleted file mode 100644 index 1f063457..00000000 Binary files a/shared/media/uploads/ride/steel-vengeance/steel-vengeance_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/the-amazing-adventures-of-spider-man/the-amazing-adventures-of-spider-man_1.jpg b/shared/media/uploads/ride/the-amazing-adventures-of-spider-man/the-amazing-adventures-of-spider-man_1.jpg deleted file mode 100644 index 0214ece4..00000000 Binary files a/shared/media/uploads/ride/the-amazing-adventures-of-spider-man/the-amazing-adventures-of-spider-man_1.jpg and /dev/null differ diff --git a/shared/media/uploads/ride/top-thrill-dragster/top-thrill-dragster_1.jpg b/shared/media/uploads/ride/top-thrill-dragster/top-thrill-dragster_1.jpg deleted file mode 100644 index d1ecd015..00000000 Binary files a/shared/media/uploads/ride/top-thrill-dragster/top-thrill-dragster_1.jpg and /dev/null differ diff --git a/shared/scripts/README.md b/shared/scripts/README.md deleted file mode 100644 index ffb3abc3..00000000 --- a/shared/scripts/README.md +++ /dev/null @@ -1,94 +0,0 @@ -# ThrillWiki Development Scripts - -## Development Server Script - -The `dev_server.sh` script sets up all necessary environment variables and starts the Django development server with proper configuration. - -### Usage - -```bash -# From the project root directory -./scripts/dev_server.sh - -# Or from anywhere -/path/to/thrillwiki_django_no_react/scripts/dev_server.sh -``` - -### What the script does - -1. **Environment Setup**: Sets all required environment variables for local development -2. **Directory Creation**: Creates necessary directories (logs, profiles, media, etc.) -3. **Database Migrations**: Runs pending migrations automatically -4. **Superuser Creation**: Creates a development superuser (admin/admin) if none exists -5. **Static Files**: Collects static files for the application -6. **Tailwind CSS**: Builds Tailwind CSS if npm is available -7. **System Checks**: Runs Django system checks -8. **Server Start**: Starts the Django development server on `http://localhost:8000` - -### Environment Variables Set - -The script automatically sets these environment variables: - -- `DJANGO_SETTINGS_MODULE=config.django.local` -- `DEBUG=True` -- `SECRET_KEY=` -- `ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0` -- `DATABASE_URL=postgis://thrillwiki_user:thrillwiki_pass@localhost:5432/thrillwiki_db` -- `CACHE_URL=locmemcache://` -- `CORS_ALLOW_ALL_ORIGINS=True` -- GeoDjango library paths for macOS -- And many more... - -### Prerequisites - -1. **PostgreSQL with PostGIS**: Make sure PostgreSQL with PostGIS extension is running -2. **Database**: Create the database `thrillwiki_db` with user `thrillwiki_user` -3. **uv**: The script uses `uv` to run Django commands -4. **Virtual Environment**: The script will activate `.venv` if it exists - -### Database Setup - -If you need to set up the database: - -```bash -# Install PostgreSQL and PostGIS (macOS with Homebrew) -brew install postgresql postgis - -# Start PostgreSQL -brew services start postgresql - -# Create database and user -psql postgres -c "CREATE USER thrillwiki_user WITH PASSWORD 'thrillwiki_pass';" -psql postgres -c "CREATE DATABASE thrillwiki_db OWNER thrillwiki_user;" -psql -d thrillwiki_db -c "CREATE EXTENSION postgis;" -psql -d thrillwiki_db -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_db TO thrillwiki_user;" -``` - -### Access Points - -Once the server is running, you can access: - -- **Main Application**: http://localhost:8000 -- **Admin Interface**: http://localhost:8000/admin/ (admin/admin) -- **Django Silk Profiler**: http://localhost:8000/silk/ -- **API Documentation**: http://localhost:8000/api/docs/ -- **API Redoc**: http://localhost:8000/api/redoc/ - -### Stopping the Server - -Press `Ctrl+C` to stop the development server. - -### Troubleshooting - -1. **Database Connection Issues**: Ensure PostgreSQL is running and the database exists -2. **GeoDjango Library Issues**: Adjust `GDAL_LIBRARY_PATH` and `GEOS_LIBRARY_PATH` if needed -3. **Permission Issues**: Make sure the script is executable with `chmod +x scripts/dev_server.sh` -4. **Virtual Environment**: Ensure your virtual environment is set up with all dependencies - -### Customization - -You can modify the script to: -- Change default database credentials -- Adjust library paths for your system -- Add additional environment variables -- Modify the development server port or host diff --git a/shared/scripts/backups/config/.github-pat.20250818_210101.backup b/shared/scripts/backups/config/.github-pat.20250818_210101.backup deleted file mode 100644 index 630c5d5e..00000000 --- a/shared/scripts/backups/config/.github-pat.20250818_210101.backup +++ /dev/null @@ -1 +0,0 @@ -[GITHUB-TOKEN-REMOVED] \ No newline at end of file diff --git a/shared/scripts/backups/config/thrillwiki-automation.env.20250818_210101.backup b/shared/scripts/backups/config/thrillwiki-automation.env.20250818_210101.backup deleted file mode 100644 index c06fa181..00000000 --- a/shared/scripts/backups/config/thrillwiki-automation.env.20250818_210101.backup +++ /dev/null @@ -1,203 +0,0 @@ -# ThrillWiki Automation Service Environment Configuration -# Copy this file to thrillwiki-automation***REMOVED*** and customize for your environment -# -# Security Note: This file should have restricted permissions (600) as it may contain -# sensitive information like GitHub Personal Access Tokens - -# [AWS-SECRET-REMOVED]==================================== -# PROJECT CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Base project directory (usually auto-detected) -# PROJECT_DIR=/home/ubuntu/thrillwiki - -# Service name for systemd integration -# SERVICE_NAME=thrillwiki - -# [AWS-SECRET-REMOVED]==================================== -# GITHUB REPOSITORY CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# GitHub repository remote name -# GITHUB_REPO=origin - -# Branch to pull from -# GITHUB_BRANCH=main - -# GitHub Personal Access Token (PAT) - Required for private repositories -# Generate at: https://github.com/settings/tokens -# Required permissions: repo (Full control of private repositories) -# GITHUB_TOKEN=ghp_your_personal_access_token_here - -# GitHub token file location (alternative to GITHUB_TOKEN) -# GITHUB_TOKEN_FILE=/home/ubuntu/thrillwiki/.github-pat - -# [AWS-SECRET-REMOVED]==================================== -# AUTOMATION TIMING CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Repository pull interval in seconds (default: 300 = 5 minutes) -# PULL_INTERVAL=300 - -# Health check interval in seconds (default: 60 = 1 minute) -# HEALTH_CHECK_INTERVAL=60 - -# Server startup timeout in seconds (default: 120 = 2 minutes) -# STARTUP_TIMEOUT=120 - -# Restart delay after failure in seconds (default: 10) -# RESTART_DELAY=10 - -# [AWS-SECRET-REMOVED]==================================== -# LOGGING CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Log directory (default: project_dir/logs) -# LOG_DIR=/home/ubuntu/thrillwiki/logs - -# Log file path -# LOG_[AWS-SECRET-REMOVED]proof-automation.log - -# Maximum log file size in bytes (default: 10485760 = 10MB) -# MAX_LOG_SIZE=10485760 - -# Lock file location to prevent multiple instances -# LOCK_FILE=/tmp/thrillwiki-bulletproof.lock - -# [AWS-SECRET-REMOVED]==================================== -# DEVELOPMENT SERVER CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Server host address (default: 0.0.0.0 for all interfaces) -# SERVER_HOST=0.0.0.0 - -# Server port (default: 8000) -# SERVER_PORT=8000 - -# [AWS-SECRET-REMOVED]==================================== -# DJANGO CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Django settings module -# DJANGO_SETTINGS_MODULE=thrillwiki.settings - -# Python path -# PYTHONPATH=/home/ubuntu/thrillwiki - -# [AWS-SECRET-REMOVED]==================================== -# ADVANCED CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# GitHub authentication script location -# GITHUB_AUTH_[AWS-SECRET-REMOVED]ithub-auth.py - -# Enable verbose logging (true/false) -# VERBOSE_LOGGING=false - -# Enable debug mode for troubleshooting (true/false) -# DEBUG_MODE=false - -# Custom git remote URL (overrides GITHUB_REPO if set) -# CUSTOM_GIT_REMOTE=https://github.com/username/repository.git - -# Email notifications for critical failures (requires email configuration) -# NOTIFICATION_EMAIL=admin@example.com - -# Maximum consecutive failures before alerting (default: 5) -# MAX_CONSECUTIVE_FAILURES=5 - -# Enable automatic dependency updates (true/false, default: true) -# AUTO_UPDATE_DEPENDENCIES=true - -# Enable automatic migrations on code changes (true/false, default: true) -# AUTO_MIGRATE=true - -# Enable automatic static file collection (true/false, default: true) -# AUTO_COLLECTSTATIC=true - -# [AWS-SECRET-REMOVED]==================================== -# SECURITY CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# GitHub authentication method (token|ssh|https) -# Default: token (uses GITHUB_TOKEN or GITHUB_TOKEN_FILE) -# GITHUB_AUTH_METHOD=token - -# SSH key path for git operations (when using ssh auth method) -# SSH_KEY_PATH=/home/ubuntu/.ssh/***REMOVED*** - -# Git user configuration for commits -# GIT_USER_NAME="ThrillWiki Automation" -# GIT_USER_EMAIL="automation@thrillwiki.local" - -# [AWS-SECRET-REMOVED]==================================== -# MONITORING AND HEALTH CHECKS -# [AWS-SECRET-REMOVED]==================================== - -# Health check URL to verify server is running -# HEALTH_CHECK_URL=http://localhost:8000/health/ - -# Health check timeout in seconds -# HEALTH_CHECK_TIMEOUT=30 - -# Enable system resource monitoring (true/false) -# MONITOR_RESOURCES=true - -# Memory usage threshold for warnings (in MB) -# MEMORY_WARNING_THRESHOLD=1024 - -# CPU usage threshold for warnings (percentage) -# CPU_WARNING_THRESHOLD=80 - -# Disk usage threshold for warnings (percentage) -# DISK_WARNING_THRESHOLD=90 - -# [AWS-SECRET-REMOVED]==================================== -# INTEGRATION SETTINGS -# [AWS-SECRET-REMOVED]==================================== - -# Webhook integration (if using thrillwiki-webhook service) -# WEBHOOK_INTEGRATION=true - -# Slack webhook URL for notifications (optional) -# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook/url - -# Discord webhook URL for notifications (optional) -# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/your/webhook/url - -# [AWS-SECRET-REMOVED]==================================== -# USAGE EXAMPLES -# [AWS-SECRET-REMOVED]==================================== - -# Example 1: Basic setup with GitHub PAT -# GITHUB_TOKEN=ghp_your_token_here -# PULL_INTERVAL=300 -# AUTO_MIGRATE=true - -# Example 2: Enhanced monitoring setup -# HEALTH_CHECK_INTERVAL=30 -# MONITOR_RESOURCES=true -# NOTIFICATION_EMAIL=admin@thrillwiki.com -# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook - -# Example 3: Development environment with frequent pulls -# PULL_INTERVAL=60 -# DEBUG_MODE=true -# VERBOSE_LOGGING=true -# AUTO_UPDATE_DEPENDENCIES=true - -# [AWS-SECRET-REMOVED]==================================== -# INSTALLATION NOTES -# [AWS-SECRET-REMOVED]==================================== - -# 1. Copy this file: cp thrillwiki-automation***REMOVED***.example thrillwiki-automation***REMOVED*** -# 2. Set secure permissions: chmod 600 thrillwiki-automation***REMOVED*** -# 3. Customize the settings above for your environment -# 4. Enable the service: sudo systemctl enable thrillwiki-automation -# 5. Start the service: sudo systemctl start thrillwiki-automation -# 6. Check status: sudo systemctl status thrillwiki-automation -# 7. View logs: sudo journalctl -u thrillwiki-automation -f - -# For security, ensure only the ubuntu user can read this file: -# sudo chown ubuntu:ubuntu thrillwiki-automation***REMOVED*** -# sudo chmod 600 thrillwiki-automation***REMOVED*** \ No newline at end of file diff --git a/shared/scripts/ci-start.sh b/shared/scripts/ci-start.sh deleted file mode 100755 index fcd33664..00000000 --- a/shared/scripts/ci-start.sh +++ /dev/null @@ -1,129 +0,0 @@ -#!/bin/bash - -# ThrillWiki Local CI Start Script -# This script starts the Django development server following project requirements - -set -e # Exit on any error - -# Configuration -PROJECT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)" -LOG_DIR="$PROJECT_DIR/logs" -PID_FILE="$LOG_DIR/django.pid" -LOG_FILE="$LOG_DIR/django.log" - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -# Logging function -log() { - echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" -} - -log_success() { - echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" -} - -log_warning() { - echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" -} - -log_error() { - echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1" -} - -# Create logs directory if it doesn't exist -mkdir -p "$LOG_DIR" - -# Change to project directory -cd "$PROJECT_DIR" - -log "Starting ThrillWiki CI deployment..." - -# Check if UV is installed -if ! command -v uv &> /dev/null; then - log_error "UV is not installed. Please install UV first." - exit 1 -fi - -# Stop any existing Django processes on port 8000 -log "Stopping any existing Django processes on port 8000..." -if lsof -ti :8000 >/dev/null 2>&1; then - lsof -ti :8000 | xargs kill -9 2>/dev/null || true - log_success "Stopped existing processes" -else - log "No existing processes found on port 8000" -fi - -# Clean up Python cache files -log "Cleaning up Python cache files..." -find . -type d -name "__pycache__" -exec rm -r {} + 2>/dev/null || true -log_success "Cache files cleaned" - -# Install/update dependencies -log "Installing/updating dependencies with UV..." -uv sync --no-dev || { - log_error "Failed to sync dependencies" - exit 1 -} - -# Run database migrations -log "Running database migrations..." -uv run manage.py migrate || { - log_error "Database migrations failed" - exit 1 -} - -# Collect static files -log "Collecting static files..." -uv run manage.py collectstatic --noinput || { - log_warning "Static file collection failed, continuing anyway" -} - -# Start the development server -log "Starting Django development server with Tailwind..." -log "Server will be available at: http://localhost:8000" -log "Press Ctrl+C to stop the server" - -# Start server and capture PID -uv run manage.py tailwind runserver 0.0.0.0:8000 & -SERVER_PID=$! - -# Save PID to file -echo $SERVER_PID > "$PID_FILE" - -log_success "Django server started with PID: $SERVER_PID" -log "Server logs are being written to: $LOG_FILE" - -# Wait for server to start -sleep 3 - -# Check if server is running -if kill -0 $SERVER_PID 2>/dev/null; then - log_success "Server is running successfully!" - - # Monitor the process - wait $SERVER_PID -else - log_error "Server failed to start" - rm -f "$PID_FILE" - exit 1 -fi - -# Cleanup on exit -cleanup() { - log "Shutting down server..." - if [ -f "$PID_FILE" ]; then - PID=$(cat "$PID_FILE") - if kill -0 $PID 2>/dev/null; then - kill $PID - log_success "Server stopped" - fi - rm -f "$PID_FILE" - fi -} - -trap cleanup EXIT INT TERM \ No newline at end of file diff --git a/shared/scripts/create_initial_data.py b/shared/scripts/create_initial_data.py deleted file mode 100644 index a93d6f85..00000000 --- a/shared/scripts/create_initial_data.py +++ /dev/null @@ -1,108 +0,0 @@ -from django.utils import timezone -from parks.models import Park, ParkLocation -from rides.models import Ride, RideModel, RollerCoasterStats -from rides.models import Manufacturer - -# Create Cedar Point -park, _ = Park.objects.get_or_create( - name="Cedar Point", - slug="cedar-point", - defaults={ - "description": ( - "Cedar Point is a 364-acre amusement park located on a Lake Erie " - "peninsula in Sandusky, Ohio." - ), - "website": "https://www.cedarpoint.com", - "size_acres": 364, - "opening_date": timezone.datetime( - 1870, 1, 1 - ).date(), # Cedar Point opened in 1870 - }, -) - -# Create location for Cedar Point -location, _ = ParkLocation.objects.get_or_create( - park=park, - defaults={ - "street_address": "1 Cedar Point Dr", - "city": "Sandusky", - "state": "OH", - "postal_code": "44870", - "country": "USA", - }, -) -# Set coordinates using the helper method -location.set_coordinates(-82.6839, 41.4822) # longitude, latitude -location.save() - -# Create Intamin as manufacturer -bm, _ = Manufacturer.objects.get_or_create( - name="Intamin", - slug="intamin", - defaults={ - "description": ( - "Intamin Amusement Rides is a design company known for creating " - "some of the most thrilling and innovative roller coasters in the world." - ), - "website": "https://www.intaminworldwide.com", - }, -) - -# Create Giga Coaster model -giga_model, _ = RideModel.objects.get_or_create( - name="Giga Coaster", - manufacturer=bm, - defaults={ - "description": ( - "A roller coaster type characterized by a height between 300–399 feet " - "and a complete circuit." - ), - "category": "RC", # Roller Coaster - }, -) - -# Create Millennium Force -millennium, _ = Ride.objects.get_or_create( - name="Millennium Force", - slug="millennium-force", - defaults={ - "description": ( - "Millennium Force is a steel roller coaster located at Cedar Point " - "amusement park in Sandusky, Ohio. It was built by Intamin of " - "Switzerland and opened on May 13, 2000 as the world's first giga " - "coaster, a class of roller coasters having a height between 300 " - "and 399 feet and a complete circuit." - ), - "park": park, - "category": "RC", - "manufacturer": bm, - "ride_model": giga_model, - "status": "OPERATING", - "opening_date": timezone.datetime(2000, 5, 13).date(), - "min_height_in": 48, # 48 inches minimum height - "capacity_per_hour": 1300, - "ride_duration_seconds": 120, # 2 minutes - }, -) - -# Create stats for Millennium Force -RollerCoasterStats.objects.get_or_create( - ride=millennium, - defaults={ - "height_ft": 310, - "length_ft": 6595, - "speed_mph": 93, - "inversions": 0, - "ride_time_seconds": 120, - "track_material": "STEEL", - "roller_coaster_type": "SITDOWN", - "max_drop_height_ft": 300, - "launch_type": "CHAIN", - "train_style": "Open-air stadium seating", - "trains_count": 3, - "cars_per_train": 9, - "seats_per_car": 4, - }, -) - -print("Initial data created successfully!") diff --git a/shared/scripts/deploy/.gitkeep b/shared/scripts/deploy/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/shared/scripts/deploy/deploy.sh b/shared/scripts/deploy/deploy.sh deleted file mode 100755 index d1c13cd8..00000000 --- a/shared/scripts/deploy/deploy.sh +++ /dev/null @@ -1,494 +0,0 @@ -#!/bin/bash - -# ThrillWiki Deployment Script -# Deploys the application to various environments - -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -# Script directory and project root -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../../" && pwd)" - -# Configuration -DEPLOY_ENV="production" -DEPLOY_DIR="$PROJECT_ROOT/deploy" -BACKUP_DIR="$PROJECT_ROOT/backups" -TIMESTAMP=$(date +"%Y%m%d_%H%M%S") - -# Function to print colored output -print_status() { - echo -e "${BLUE}[INFO]${NC} $1" -} - -print_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -print_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -print_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Function to check if a command exists -command_exists() { - command -v "$1" >/dev/null 2>&1 -} - -# Function to check deployment requirements -check_deployment_requirements() { - print_status "Checking deployment requirements..." - - local missing_deps=() - - # Check if deployment artifacts exist - if [ ! -d "$DEPLOY_DIR" ]; then - missing_deps+=("deployment_artifacts") - fi - - if [ ! -f "$DEPLOY_DIR/manifest.json" ]; then - missing_deps+=("deployment_manifest") - fi - - # Check for deployment tools - if [ "$DEPLOY_METHOD" = "docker" ]; then - if ! command_exists docker; then - missing_deps+=("docker") - fi - fi - - if [ "$DEPLOY_METHOD" = "rsync" ]; then - if ! command_exists rsync; then - missing_deps+=("rsync") - fi - fi - - if [ ${#missing_deps[@]} -ne 0 ]; then - print_error "Missing deployment requirements: ${missing_deps[*]}" - exit 1 - fi - - print_success "Deployment requirements met!" -} - -# Function to create backup -create_backup() { - print_status "Creating backup before deployment..." - - mkdir -p "$BACKUP_DIR" - - local backup_path="$BACKUP_DIR/backup_$TIMESTAMP" - - # Create backup directory - mkdir -p "$backup_path" - - # Backup current deployment if it exists - if [ -d "$DEPLOY_TARGET" ]; then - print_status "Backing up current deployment..." - cp -r "$DEPLOY_TARGET" "$backup_path/current" - fi - - # Backup database if requested - if [ "$BACKUP_DATABASE" = true ]; then - print_status "Backing up database..." - # This would depend on your database setup - # For SQLite: - if [ -f "$PROJECT_ROOT/backend/db.sqlite3" ]; then - cp "$PROJECT_ROOT/backend/db.sqlite3" "$backup_path/database.sqlite3" - fi - fi - - # Backup environment files - if [ -f "$PROJECT_ROOT/.env" ]; then - cp "$PROJECT_ROOT/.env" "$backup_path/.env.backup" - fi - - print_success "Backup created: $backup_path" -} - -# Function to prepare deployment artifacts -prepare_artifacts() { - print_status "Preparing deployment artifacts..." - - # Check if build artifacts exist - if [ ! -d "$DEPLOY_DIR" ]; then - print_error "No deployment artifacts found. Please run build-all.sh first." - exit 1 - fi - - # Validate manifest - if [ -f "$DEPLOY_DIR/manifest.json" ]; then - print_status "Validating deployment manifest..." - # You could add more validation here - cat "$DEPLOY_DIR/manifest.json" | grep -q "build_timestamp" || { - print_error "Invalid deployment manifest" - exit 1 - } - fi - - print_success "Deployment artifacts ready!" -} - -# Function to deploy to local development -deploy_local() { - print_status "Deploying to local development environment..." - - local target_dir="$PROJECT_ROOT/deployment" - - # Create target directory - mkdir -p "$target_dir" - - # Copy artifacts - print_status "Copying frontend artifacts..." - cp -r "$DEPLOY_DIR/frontend" "$target_dir/" - - print_status "Copying backend artifacts..." - mkdir -p "$target_dir/backend" - cp -r "$DEPLOY_DIR/backend/staticfiles" "$target_dir/backend/" - - # Copy deployment configuration - cp "$DEPLOY_DIR/manifest.json" "$target_dir/" - - print_success "Local deployment completed!" - print_status "Deployment available at: $target_dir" -} - -# Function to deploy via rsync -deploy_rsync() { - print_status "Deploying via rsync..." - - if [ -z "$DEPLOY_HOST" ]; then - print_error "DEPLOY_HOST not set for rsync deployment" - exit 1 - fi - - local target="" - - if [ -n "$DEPLOY_USER" ]; then - target="$DEPLOY_USER@$DEPLOY_HOST:$DEPLOY_PATH" - else - target="$DEPLOY_HOST:$DEPLOY_PATH" - fi - - print_status "Syncing files to $target..." - - # Rsync options: - # -a: archive mode (recursive, preserves attributes) - # -v: verbose - # -z: compress during transfer - # --delete: delete files not in source - # --exclude: exclude certain files - rsync -avz --delete \ - --exclude='.git' \ - --exclude='node_modules' \ - --exclude='__pycache__' \ - --exclude='*.log' \ - "$DEPLOY_DIR/" "$target" - - print_success "Rsync deployment completed!" -} - -# Function to deploy via Docker -deploy_docker() { - print_status "Deploying via Docker..." - - local image_name="thrillwiki-$DEPLOY_ENV" - local container_name="thrillwiki-$DEPLOY_ENV" - - # Build Docker image - print_status "Building Docker image: $image_name" - docker build -t "$image_name" \ - --build-arg DEPLOY_ENV="$DEPLOY_ENV" \ - -f "$PROJECT_ROOT/Dockerfile" \ - "$PROJECT_ROOT" - - # Stop existing container - if docker ps -q -f name="$container_name" | grep -q .; then - print_status "Stopping existing container..." - docker stop "$container_name" - fi - - # Remove existing container - if docker ps -a -q -f name="$container_name" | grep -q .; then - print_status "Removing existing container..." - docker rm "$container_name" - fi - - # Run new container - print_status "Starting new container..." - docker run -d \ - --name "$container_name" \ - -p 8080:80 \ - -e DEPLOY_ENV="$DEPLOY_ENV" \ - "$image_name" - - print_success "Docker deployment completed!" - print_status "Container: $container_name" - print_status "URL: http://localhost:8080" -} - -# Function to run post-deployment checks -run_post_deploy_checks() { - print_status "Running post-deployment checks..." - - local health_url="" - - case $DEPLOY_METHOD in - "local") - health_url="http://localhost:8080/health" - ;; - "docker") - health_url="http://localhost:8080/health" - ;; - "rsync") - if [ -n "$DEPLOY_HOST" ]; then - health_url="http://$DEPLOY_HOST/health" - fi - ;; - esac - - if [ -n "$health_url" ]; then - print_status "Checking health endpoint: $health_url" - if curl -s -f "$health_url" > /dev/null 2>&1; then - print_success "Health check passed!" - else - print_warning "Health check failed. Please verify deployment." - fi - fi - - print_success "Post-deployment checks completed!" -} - -# Function to generate deployment report -generate_deployment_report() { - print_status "Generating deployment report..." - - local report_file="$PROJECT_ROOT/deployment-report-$DEPLOY_ENV-$TIMESTAMP.txt" - - cat > "$report_file" << EOF -ThrillWiki Deployment Report -============================ - -Deployment Information: -- Deployment Date: $(date) -- Environment: $DEPLOY_ENV -- Method: $DEPLOY_METHOD -- Project Root: $PROJECT_ROOT - -Deployment Details: -- Source Directory: $DEPLOY_DIR -- Target: $DEPLOY_TARGET -- Backup Created: $([ "$CREATE_BACKUP" = true ] && echo "Yes" || echo "No") - -Build Information: -$(if [ -f "$DEPLOY_DIR/manifest.json" ]; then - cat "$DEPLOY_DIR/manifest.json" -else - echo "No manifest found" -fi) - -System Information: -- Hostname: $(hostname) -- User: $(whoami) -- OS: $(uname -s) $(uname -r) - -Deployment Status: SUCCESS - -Post-Deployment: -- Health Check: $([ "$RUN_CHECKS" = true ] && echo "Run" || echo "Skipped") -- Backup Location: $([ "$CREATE_BACKUP" = true ] && echo "$BACKUP_DIR/backup_$TIMESTAMP" || echo "None") - -EOF - - print_success "Deployment report generated: $report_file" -} - -# Function to show usage -show_usage() { - cat << EOF -Usage: $0 [ENVIRONMENT] [OPTIONS] - -Deploy ThrillWiki to the specified environment. - -Environments: - dev Development environment - staging Staging environment - production Production environment - -Options: - -h, --help Show this help message - -m, --method METHOD Deployment method (local, rsync, docker) - --no-backup Skip backup creation - --no-checks Skip post-deployment checks - --no-report Skip deployment report generation - -Examples: - $0 production # Deploy to production using default method - $0 staging --method docker # Deploy to staging using Docker - $0 dev --no-backup # Deploy to dev without backup - -Environment Variables: - DEPLOY_METHOD Deployment method (default: local) - DEPLOY_HOST Target host for rsync deployment - DEPLOY_USER SSH user for rsync deployment - DEPLOY_PATH Target path for rsync deployment - CREATE_BACKUP Create backup before deployment (default: true) - BACKUP_DATABASE Backup database (default: false) - -EOF -} - -# Parse command line arguments -DEPLOY_METHOD="local" -CREATE_BACKUP=true -RUN_CHECKS=true -SKIP_REPORT=false - -# Get environment from first argument -if [ $# -gt 0 ]; then - case $1 in - dev|staging|production) - DEPLOY_ENV="$1" - shift - ;; - -h|--help) - show_usage - exit 0 - ;; - *) - print_error "Invalid environment: $1" - show_usage - exit 1 - ;; - esac -fi - -# Parse remaining arguments -while [[ $# -gt 0 ]]; do - case $1 in - -h|--help) - show_usage - exit 0 - ;; - -m|--method) - DEPLOY_METHOD="$2" - shift 2 - ;; - --no-backup) - CREATE_BACKUP=false - shift - ;; - --no-checks) - RUN_CHECKS=false - shift - ;; - --no-report) - SKIP_REPORT=true - shift - ;; - *) - print_error "Unknown option: $1" - show_usage - exit 1 - ;; - esac -done - -# Override from environment variables -if [ ! -z "$DEPLOY_METHOD_ENV" ]; then - DEPLOY_METHOD=$DEPLOY_METHOD_ENV -fi - -if [ "$CREATE_BACKUP_ENV" = "false" ]; then - CREATE_BACKUP=false -fi - -# Set deployment target based on method -case $DEPLOY_METHOD in - "local") - DEPLOY_TARGET="$PROJECT_ROOT/deployment" - ;; - "rsync") - DEPLOY_TARGET="${DEPLOY_USER:-}$(if [ -n "$DEPLOY_USER" ]; then echo "@"; fi)${DEPLOY_HOST:-localhost}:${DEPLOY_PATH:-/var/www/thrillwiki}" - ;; - "docker") - DEPLOY_TARGET="docker_container" - ;; - *) - print_error "Unsupported deployment method: $DEPLOY_METHOD" - exit 1 - ;; -esac - -# Print banner -echo -e "${GREEN}" -echo "==========================================" -echo " ThrillWiki Deployment" -echo "==========================================" -echo -e "${NC}" - -print_status "Environment: $DEPLOY_ENV" -print_status "Method: $DEPLOY_METHOD" -print_status "Target: $DEPLOY_TARGET" -print_status "Create backup: $CREATE_BACKUP" - -# Check deployment requirements -check_deployment_requirements - -# Prepare deployment artifacts -prepare_artifacts - -# Create backup if requested -if [ "$CREATE_BACKUP" = true ]; then - create_backup -else - print_warning "Skipping backup creation as requested" -fi - -# Deploy based on method -case $DEPLOY_METHOD in - "local") - deploy_local - ;; - "rsync") - deploy_rsync - ;; - "docker") - deploy_docker - ;; - *) - print_error "Unsupported deployment method: $DEPLOY_METHOD" - exit 1 - ;; -esac - -# Run post-deployment checks -if [ "$RUN_CHECKS" = true ]; then - run_post_deploy_checks -else - print_warning "Skipping post-deployment checks as requested" -fi - -# Generate deployment report -if [ "$SKIP_REPORT" = false ]; then - generate_deployment_report -else - print_warning "Skipping deployment report generation as requested" -fi - -print_success "Deployment completed successfully!" -echo "" -print_status "Environment: $DEPLOY_ENV" -print_status "Method: $DEPLOY_METHOD" -print_status "Target: $DEPLOY_TARGET" -echo "" -print_status "Deployment report: $PROJECT_ROOT/deployment-report-$DEPLOY_ENV-$TIMESTAMP.txt" \ No newline at end of file diff --git a/shared/scripts/dev/.gitkeep b/shared/scripts/dev/.gitkeep deleted file mode 100644 index e69de29b..00000000 diff --git a/shared/scripts/dev/setup-dev.sh b/shared/scripts/dev/setup-dev.sh deleted file mode 100755 index 1bec67b0..00000000 --- a/shared/scripts/dev/setup-dev.sh +++ /dev/null @@ -1,368 +0,0 @@ -#!/bin/bash - -# ThrillWiki Development Environment Setup -# Sets up the complete development environment for both backend and frontend - -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -# Script directory and project root -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../../" && pwd)" - -# Configuration -BACKEND_DIR="$PROJECT_ROOT/backend" -FRONTEND_DIR="$PROJECT_ROOT/frontend" - -# Function to print colored output -print_status() { - echo -e "${BLUE}[INFO]${NC} $1" -} - -print_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -print_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -print_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Function to check if a command exists -command_exists() { - command -v "$1" >/dev/null 2>&1 -} - -# Function to check system requirements -check_requirements() { - print_status "Checking system requirements..." - - local missing_deps=() - - # Check Python - if ! command_exists python3; then - missing_deps+=("python3") - else - local python_version=$(python3 --version | cut -d' ' -f2 | cut -d'.' -f1,2) - if (( $(echo "$python_version < 3.11" | bc -l) )); then - print_warning "Python version $python_version detected. Python 3.11+ recommended." - fi - fi - - # Check uv - if ! command_exists uv; then - missing_deps+=("uv") - fi - - # Check Node.js - if ! command_exists node; then - missing_deps+=("node") - else - local node_version=$(node --version | cut -d'v' -f2 | cut -d'.' -f1) - if (( node_version < 18 )); then - print_warning "Node.js version $node_version detected. Node.js 18+ recommended." - fi - fi - - # Check pnpm - if ! command_exists pnpm; then - missing_deps+=("pnpm") - fi - - # Check PostgreSQL (optional) - if ! command_exists psql; then - print_warning "PostgreSQL not found. SQLite will be used for development." - fi - - # Check Redis (optional) - if ! command_exists redis-server; then - print_warning "Redis not found. Some features may not work." - fi - - if [ ${#missing_deps[@]} -ne 0 ]; then - print_error "Missing required dependencies: ${missing_deps[*]}" - print_status "Please install the missing dependencies and run this script again." - print_status "Installation instructions:" - print_status " - Python 3.11+: https://www.python.org/downloads/" - print_status " - uv: pip install uv" - print_status " - Node.js 18+: https://nodejs.org/" - print_status " - pnpm: npm install -g pnpm" - exit 1 - fi - - print_success "All system requirements met!" -} - -# Function to setup backend -setup_backend() { - print_status "Setting up Django backend..." - - cd "$BACKEND_DIR" - - # Install Python dependencies with uv - print_status "Installing Python dependencies..." - if [ ! -d ".venv" ]; then - uv sync - else - print_warning "Virtual environment already exists. Updating dependencies..." - uv sync - fi - - # Create .env file if it doesn't exist - if [ ! -f ".env" ]; then - print_status "Creating backend .env file..." - cp .env.example .env - print_warning "Please edit backend/.env with your settings" - else - print_warning "Backend .env file already exists" - fi - - # Run database migrations - print_status "Running database migrations..." - uv run manage.py migrate - - # Create superuser (optional) - print_status "Creating Django superuser..." - echo "from django.contrib.auth import get_user_model; User = get_user_model(); User.objects.filter(username='admin').exists() or User.objects.create_superuser('admin', 'admin@example.com', 'admin')" | uv run manage.py shell - - print_success "Backend setup completed!" - cd "$PROJECT_ROOT" -} - -# Function to setup frontend -setup_frontend() { - print_status "Setting up Vue.js frontend..." - - cd "$FRONTEND_DIR" - - # Install Node.js dependencies - print_status "Installing Node.js dependencies..." - if [ ! -d "node_modules" ]; then - pnpm install - else - print_warning "node_modules already exists. Updating dependencies..." - pnpm install - fi - - # Create environment files if they don't exist - if [ ! -f ".env.local" ]; then - print_status "Creating frontend .env.local file..." - cp .env.development .env.local - print_warning "Please edit frontend/.env.local with your settings" - else - print_warning "Frontend .env.local file already exists" - fi - - print_success "Frontend setup completed!" - cd "$PROJECT_ROOT" -} - -# Function to setup root environment -setup_root_env() { - print_status "Setting up root environment..." - - cd "$PROJECT_ROOT" - - # Create root .env file if it doesn't exist - if [ ! -f ".env" ]; then - print_status "Creating root .env file..." - cp .env.example .env - print_warning "Please edit .env with your settings" - else - print_warning "Root .env file already exists" - fi - - print_success "Root environment setup completed!" -} - -# Function to verify setup -verify_setup() { - print_status "Verifying setup..." - - local issues=() - - # Check backend - cd "$BACKEND_DIR" - if [ ! -d ".venv" ]; then - issues+=("Backend virtual environment not found") - fi - - if [ ! -f ".env" ]; then - issues+=("Backend .env file not found") - fi - - # Check if Django can start - if ! uv run manage.py check --settings=config.django.local >/dev/null 2>&1; then - issues+=("Django configuration check failed") - fi - - cd "$FRONTEND_DIR" - - # Check frontend - if [ ! -d "node_modules" ]; then - issues+=("Frontend node_modules not found") - fi - - if [ ! -f ".env.local" ]; then - issues+=("Frontend .env.local file not found") - fi - - # Check if Vue can build - if ! pnpm run type-check >/dev/null 2>&1; then - issues+=("Vue.js type check failed") - fi - - cd "$PROJECT_ROOT" - - if [ ${#issues[@]} -ne 0 ]; then - print_warning "Setup verification found issues:" - for issue in "${issues[@]}"; do - echo -e " - ${YELLOW}$issue${NC}" - done - return 1 - else - print_success "Setup verification passed!" - return 0 - fi -} - -# Function to show usage -show_usage() { - cat << EOF -Usage: $0 [OPTIONS] - -Set up the complete ThrillWiki development environment. - -Options: - -h, --help Show this help message - -b, --backend-only Setup only the backend - -f, --frontend-only Setup only the frontend - -y, --yes Skip confirmation prompts - --no-verify Skip setup verification - -Examples: - $0 # Setup both backend and frontend - $0 --backend-only # Setup only backend - $0 --frontend-only # Setup only frontend - -Environment Variables: - SKIP_CONFIRMATION Set to 'true' to skip confirmation prompts - SKIP_VERIFICATION Set to 'true' to skip verification - -EOF -} - -# Parse command line arguments -BACKEND_ONLY=false -FRONTEND_ONLY=false -SKIP_CONFIRMATION=false -SKIP_VERIFICATION=false - -while [[ $# -gt 0 ]]; do - case $1 in - -h|--help) - show_usage - exit 0 - ;; - -b|--backend-only) - BACKEND_ONLY=true - shift - ;; - -f|--frontend-only) - FRONTEND_ONLY=true - shift - ;; - -y|--yes) - SKIP_CONFIRMATION=true - shift - ;; - --no-verify) - SKIP_VERIFICATION=true - shift - ;; - *) - print_error "Unknown option: $1" - show_usage - exit 1 - ;; - esac -done - -# Override from environment variables -if [ "$SKIP_CONFIRMATION" = "true" ] || [ "$SKIP_CONFIRMATION_ENV" = "true" ]; then - SKIP_CONFIRMATION=true -fi - -if [ "$SKIP_VERIFICATION" = "true" ] || [ "$SKIP_VERIFICATION_ENV" = "true" ]; then - SKIP_VERIFICATION=true -fi - -# Print banner -echo -e "${GREEN}" -echo "==========================================" -echo " ThrillWiki Development Setup" -echo "==========================================" -echo -e "${NC}" - -print_status "Project root: $PROJECT_ROOT" - -# Confirmation prompt -if [ "$SKIP_CONFIRMATION" = false ]; then - echo "" - read -p "This will set up the development environment. Continue? (y/N): " -n 1 -r - echo "" - if [[ ! $REPLY =~ ^[Yy]$ ]]; then - print_status "Setup cancelled." - exit 0 - fi -fi - -# Check requirements -check_requirements - -# Setup components based on options -if [ "$BACKEND_ONLY" = true ]; then - print_status "Setting up backend only..." - setup_backend - setup_root_env -elif [ "$FRONTEND_ONLY" = true ]; then - print_status "Setting up frontend only..." - setup_frontend - setup_root_env -else - print_status "Setting up both backend and frontend..." - setup_backend - setup_frontend - setup_root_env -fi - -# Verify setup -if [ "$SKIP_VERIFICATION" = false ]; then - echo "" - if verify_setup; then - print_success "Development environment setup completed successfully!" - echo "" - print_status "Next steps:" - echo " 1. Edit .env files with your configuration" - echo " 2. Start development servers: ./shared/scripts/dev/start-all.sh" - echo " 3. Visit http://localhost:5174 for the frontend" - echo " 4. Visit http://localhost:8000 for the backend API" - echo "" - print_status "Happy coding! 🚀" - else - print_warning "Setup completed with issues. Please review the warnings above." - exit 1 - fi -else - print_success "Development environment setup completed!" - print_status "Skipped verification as requested." -fi \ No newline at end of file diff --git a/shared/scripts/dev/start-all.sh b/shared/scripts/dev/start-all.sh deleted file mode 100755 index 4f440c14..00000000 --- a/shared/scripts/dev/start-all.sh +++ /dev/null @@ -1,279 +0,0 @@ -#!/bin/bash - -# ThrillWiki Development Server Starter -# Starts both Django backend and Vue.js frontend servers concurrently - -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -# Script directory and project root -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../../" && pwd)" - -# Configuration -BACKEND_PORT=8000 -FRONTEND_PORT=5174 -BACKEND_DIR="$PROJECT_ROOT/backend" -FRONTEND_DIR="$PROJECT_ROOT/frontend" - -# Function to print colored output -print_status() { - echo -e "${BLUE}[INFO]${NC} $1" -} - -print_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -print_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -print_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Function to check if a port is available -check_port() { - local port=$1 - if lsof -Pi :$port -sTCP:LISTEN -t >/dev/null ; then - return 1 - else - return 0 - fi -} - -# Function to kill process on port -kill_port() { - local port=$1 - local pid=$(lsof -ti:$port) - if [ ! -z "$pid" ]; then - print_warning "Killing process $pid on port $port" - kill -9 $pid - fi -} - -# Function to wait for service to be ready -wait_for_service() { - local url=$1 - local service_name=$2 - local max_attempts=30 - local attempt=1 - - print_status "Waiting for $service_name to be ready at $url" - - while [ $attempt -le $max_attempts ]; do - if curl -s -f "$url" > /dev/null 2>&1; then - print_success "$service_name is ready!" - return 0 - fi - - echo -n "." - sleep 2 - ((attempt++)) - done - - print_error "$service_name failed to start after $max_attempts attempts" - return 1 -} - -# Function to start backend server -start_backend() { - print_status "Starting Django backend server..." - - # Kill any existing process on backend port - kill_port $BACKEND_PORT - - # Clean up Python cache files - find "$BACKEND_DIR" -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true - - cd "$BACKEND_DIR" - - # Check if virtual environment exists - if [ ! -d ".venv" ]; then - print_error "Backend virtual environment not found. Please run setup-dev.sh first." - exit 1 - fi - - # Start Django server in background - print_status "Starting Django development server on port $BACKEND_PORT" - uv run manage.py runserver 0.0.0.0:$BACKEND_PORT & - BACKEND_PID=$! - - # Wait for backend to be ready - wait_for_service "http://localhost:$BACKEND_PORT/api/" "Django backend" - - cd "$PROJECT_ROOT" -} - -# Function to start frontend server -start_frontend() { - print_status "Starting Vue.js frontend server..." - - cd "$FRONTEND_DIR" - - # Check if node_modules exists - if [ ! -d "node_modules" ]; then - print_error "Frontend dependencies not installed. Please run setup-dev.sh first." - exit 1 - fi - - # Start Vue.js dev server in background - print_status "Starting Vue.js development server on port $FRONTEND_PORT" - pnpm run dev & - FRONTEND_PID=$! - - # Wait for frontend to be ready - wait_for_service "http://localhost:$FRONTEND_PORT" "Vue.js frontend" - - cd "$PROJECT_ROOT" -} - -# Function to cleanup on script exit -cleanup() { - print_warning "Shutting down development servers..." - - if [ ! -z "$BACKEND_PID" ]; then - kill $BACKEND_PID 2>/dev/null || true - fi - - if [ ! -z "$FRONTEND_PID" ]; then - kill $FRONTEND_PID 2>/dev/null || true - fi - - # Kill any remaining processes on our ports - kill_port $BACKEND_PORT - kill_port $FRONTEND_PORT - - print_success "Development servers stopped." - exit 0 -} - -# Function to show usage -show_usage() { - cat << EOF -Usage: $0 [OPTIONS] - -Start both Django backend and Vue.js frontend development servers. - -Options: - -h, --help Show this help message - -b, --backend-only Start only the backend server - -f, --frontend-only Start only the frontend server - -p, --production Start in production mode (if applicable) - --no-wait Don't wait for services to be ready - -Examples: - $0 # Start both servers - $0 --backend-only # Start only backend - $0 --frontend-only # Start only frontend - -Environment Variables: - BACKEND_PORT Backend server port (default: 8000) - FRONTEND_PORT Frontend server port (default: 5174) - -EOF -} - -# Parse command line arguments -BACKEND_ONLY=false -FRONTEND_ONLY=false -PRODUCTION=false -WAIT_FOR_SERVICES=true - -while [[ $# -gt 0 ]]; do - case $1 in - -h|--help) - show_usage - exit 0 - ;; - -b|--backend-only) - BACKEND_ONLY=true - shift - ;; - -f|--frontend-only) - FRONTEND_ONLY=true - shift - ;; - -p|--production) - PRODUCTION=true - shift - ;; - --no-wait) - WAIT_FOR_SERVICES=false - shift - ;; - *) - print_error "Unknown option: $1" - show_usage - exit 1 - ;; - esac -done - -# Override ports from environment if set -if [ ! -z "$BACKEND_PORT_ENV" ]; then - BACKEND_PORT=$BACKEND_PORT_ENV -fi - -if [ ! -z "$FRONTEND_PORT_ENV" ]; then - FRONTEND_PORT=$FRONTEND_PORT_ENV -fi - -# Set up signal handlers for graceful shutdown -trap cleanup SIGINT SIGTERM - -# Print banner -echo -e "${GREEN}" -echo "==========================================" -echo " ThrillWiki Development Environment" -echo "==========================================" -echo -e "${NC}" - -print_status "Project root: $PROJECT_ROOT" -print_status "Backend port: $BACKEND_PORT" -print_status "Frontend port: $FRONTEND_PORT" - -# Check if required tools are available -command -v uv >/dev/null 2>&1 || { print_error "uv is required but not installed. Please install uv first."; exit 1; } -command -v pnpm >/dev/null 2>&1 || { print_error "pnpm is required but not installed. Please install pnpm first."; exit 1; } -command -v curl >/dev/null 2>&1 || { print_error "curl is required but not installed."; exit 1; } - -# Start services based on options -if [ "$BACKEND_ONLY" = true ]; then - print_status "Starting backend only..." - start_backend - print_success "Backend server started successfully!" - print_status "Backend URL: http://localhost:$BACKEND_PORT" - print_status "API URL: http://localhost:$BACKEND_PORT/api/" - wait -elif [ "$FRONTEND_ONLY" = true ]; then - print_status "Starting frontend only..." - start_frontend - print_success "Frontend server started successfully!" - print_status "Frontend URL: http://localhost:$FRONTEND_PORT" - wait -else - print_status "Starting both backend and frontend servers..." - start_backend & - BACKEND_PID=$! - start_frontend & - FRONTEND_PID=$! - - print_success "Development servers started successfully!" - echo "" - print_status "Backend URL: http://localhost:$BACKEND_PORT" - print_status "API URL: http://localhost:$BACKEND_PORT/api/" - print_status "Frontend URL: http://localhost:$FRONTEND_PORT" - echo "" - print_status "Press Ctrl+C to stop all servers" - - # Wait for both processes - wait -fi \ No newline at end of file diff --git a/shared/scripts/dev_server.sh b/shared/scripts/dev_server.sh deleted file mode 100755 index 3fc96f31..00000000 --- a/shared/scripts/dev_server.sh +++ /dev/null @@ -1,147 +0,0 @@ -#!/bin/bash - -# ThrillWiki Development Server Script -# This script sets up the proper environment variables and runs the Django development server - -set -e # Exit on any error - -echo "🚀 Starting ThrillWiki Development Server..." - -# Change to the project directory (parent of scripts folder) -cd "$(dirname "$0")/.." - -# Set Django environment to local development -export DJANGO_SETTINGS_MODULE="config.django.local" - -# Core Django settings -export DEBUG="True" -export SECRET_KEY="django-insecure-dev-key-not-for-production-$(openssl rand -base64 32 | tr -d "=+/" | cut -c1-25)" - -# Allowed hosts for development -export ALLOWED_HOSTS="localhost,127.0.0.1,0.0.0.0" - -# CSRF trusted origins for development -export CSRF_TRUSTED_ORIGINS="http://localhost:8000,http://127.0.0.1:8000,https://127.0.0.1:8000" - -# Database configuration (PostgreSQL with PostGIS) -export DATABASE_URL="postgis://thrillwiki_user:thrillwiki@localhost:5432/thrillwiki_test_db" - -# Cache configuration (use locmem for development if Redis not available) -export CACHE_URL="locmemcache://" -export REDIS_URL="redis://127.0.0.1:6379/1" - -# CORS settings for API development -export CORS_ALLOW_ALL_ORIGINS="True" -export CORS_ALLOWED_ORIGINS="" - -# Email configuration for development (console backend) -export EMAIL_URL="consolemail://" - -# GeoDjango library paths for macOS (adjust if needed) -export GDAL_LIBRARY_PATH="/opt/homebrew/lib/libgdal.dylib" -export GEOS_LIBRARY_PATH="/opt/homebrew/lib/libgeos_c.dylib" - -# API rate limiting (generous for development) -export API_RATE_LIMIT_PER_MINUTE="1000" -export API_RATE_LIMIT_PER_HOUR="10000" - -# Cache settings -export CACHE_MIDDLEWARE_SECONDS="1" # Very short cache for development -export CACHE_MIDDLEWARE_KEY_PREFIX="thrillwiki_dev" - -# Social auth settings (you can set these if you have them) -# export GOOGLE_OAUTH2_CLIENT_ID="" -# export GOOGLE_OAUTH2_CLIENT_SECRET="" -# export DISCORD_CLIENT_ID="" -# export DISCORD_CLIENT_SECRET="" - -# Create necessary directories -echo "📁 Creating necessary directories..." -mkdir -p logs -mkdir -p profiles -mkdir -p media -mkdir -p staticfiles -mkdir -p static/css - -# Check if virtual environment is activated -if [[ -z "$VIRTUAL_ENV" ]] && [[ -d ".venv" ]]; then - echo "🔧 Activating virtual environment..." - source .venv/bin/activate -fi - -# Run database migrations if needed -echo "🗄️ Checking database migrations..." -if uv run manage.py migrate --check 2>/dev/null; then - echo "✅ Database migrations are up to date" -else - echo "🔄 Running database migrations..." - uv run manage.py migrate --noinput -fi -echo "Resetting database..." -if uv run manage.py seed_sample_data 2>/dev/null; then - echo "Seeding complete!" -else - echo "Seeding test data to database..." - uv run manage.py seed_sample_data -fi - -# Create superuser if it doesn't exist -echo "👤 Checking for superuser..." -if ! uv run manage.py shell -c "from django.contrib.auth import get_user_model; User = get_user_model(); exit(0 if User.objects.filter(is_superuser=True).exists() else 1)" 2>/dev/null; then - echo "👤 Creating development superuser (admin/admin)..." - uv run manage.py shell -c " -from django.contrib.auth import get_user_model -User = get_user_model() -if not User.objects.filter(username='admin').exists(): - User.objects.create_superuser('admin', 'admin@example.com', 'admin') - print('Created superuser: admin/admin') -else: - print('Superuser already exists') -" -fi - -# Collect static files for development -echo "📦 Collecting static files..." -uv run manage.py collectstatic --noinput --clear - -# Build Tailwind CSS -if command -v npm &> /dev/null; then - echo "🎨 Building Tailwind CSS..." - uv run manage.py tailwind build -else - echo "⚠️ npm not found, skipping Tailwind CSS build" -fi - -# Run system checks -echo "🔍 Running system checks..." -if uv run manage.py check; then - echo "✅ System checks passed" -else - echo "❌ System checks failed, but continuing..." -fi - -# Display environment info -echo "" -echo "🌍 Development Environment:" -echo " - Settings Module: $DJANGO_SETTINGS_MODULE" -echo " - Debug Mode: $DEBUG" -echo " - Database: PostgreSQL with PostGIS" -echo " - Cache: Local memory cache" -echo " - Admin URL: http://localhost:8000/admin/" -echo " - Admin User: admin / admin" -echo " - Silk Profiler: http://localhost:8000/silk/" -echo " - Debug Toolbar: Available on debug pages" -echo " - API Documentation: http://localhost:8000/api/docs/" -echo "" - -# Start the development server -echo "🌟 Starting Django development server on http://localhost:8000" -echo "Press Ctrl+C to stop the server" -echo "" - -# Use runserver_plus if django-extensions is available, otherwise use standard runserver -if uv run python -c "import django_extensions" 2>/dev/null; then - exec uv run manage.py runserver_plus 0.0.0.0:8000 -else - exec uv run manage.py runserver 0.0.0.0:8000 -fi diff --git a/shared/scripts/github-auth.py b/shared/scripts/github-auth.py deleted file mode 100755 index f07982f0..00000000 --- a/shared/scripts/github-auth.py +++ /dev/null @@ -1,234 +0,0 @@ -#!/usr/bin/env python3 -""" -GitHub OAuth Device Flow Authentication for ThrillWiki CI/CD -This script implements GitHub's device flow to securely obtain access tokens. -""" - -import sys -import time -import requests -import argparse -from pathlib import Path - -# GitHub OAuth App Configuration -CLIENT_ID = "Iv23liOX5Hp75AxhUvIe" -TOKEN_FILE = ".github-token" - - -def parse_response(response): - """Parse HTTP response and handle errors.""" - if response.status_code in [200, 201]: - return response.json() - elif response.status_code == 401: - print("You are not authorized. Run the `login` command.") - sys.exit(1) - else: - print(f"HTTP {response.status_code}: {response.text}") - sys.exit(1) - - -def request_device_code(): - """Request a device code from GitHub.""" - url = "https://github.com/login/device/code" - data = {"client_id": CLIENT_ID} - headers = {"Accept": "application/json"} - - response = requests.post(url, data=data, headers=headers) - return parse_response(response) - - -def request_token(device_code): - """Request an access token using the device code.""" - url = "https://github.com/login/oauth/access_token" - data = { - "client_id": CLIENT_ID, - "device_code": device_code, - "grant_type": "urn:ietf:params:oauth:grant-type:device_code", - } - headers = {"Accept": "application/json"} - - response = requests.post(url, data=data, headers=headers) - return parse_response(response) - - -def poll_for_token(device_code, interval): - """Poll GitHub for the access token after user authorization.""" - print("Waiting for authorization...") - - while True: - response = request_token(device_code) - error = response.get("error") - access_token = response.get("access_token") - - if error: - if error == "authorization_pending": - # User hasn't entered the code yet - print(".", end="", flush=True) - time.sleep(interval) - continue - elif error == "slow_down": - # Polling too fast - time.sleep(interval + 5) - continue - elif error == "expired_token": - print("\nThe device code has expired. Please run `login` again.") - sys.exit(1) - elif error == "access_denied": - print("\nLogin cancelled by user.") - sys.exit(1) - else: - print(f"\nError: {response}") - sys.exit(1) - - # Success! Save the token - token_path = Path(TOKEN_FILE) - token_path.write_text(access_token) - token_path.chmod(0o600) # Read/write for owner only - - print(f"\nToken saved to {TOKEN_FILE}") - break - - -def login(): - """Initiate the GitHub OAuth device flow login process.""" - print("Starting GitHub authentication...") - - device_response = request_device_code() - verification_uri = device_response["verification_uri"] - user_code = device_response["user_code"] - device_code = device_response["device_code"] - interval = device_response["interval"] - - print(f"\nPlease visit: {verification_uri}") - print(f"and enter code: {user_code}") - print("\nWaiting for you to complete authorization in your browser...") - - poll_for_token(device_code, interval) - print("Successfully authenticated!") - return True - - -def whoami(): - """Display information about the authenticated user.""" - token_path = Path(TOKEN_FILE) - - if not token_path.exists(): - print("You are not authorized. Run the `login` command.") - sys.exit(1) - - try: - token = token_path.read_text().strip() - except Exception as e: - print(f"Error reading token: {e}") - print("You may need to run the `login` command again.") - sys.exit(1) - - url = "https://api.github.com/user" - headers = { - "Accept": "application/vnd.github+json", - "Authorization": f"Bearer {token}", - } - - response = requests.get(url, headers=headers) - user_data = parse_response(response) - - print(f"You are authenticated as: {user_data['login']}") - print(f"Name: {user_data.get('name', 'Not set')}") - print(f"Email: {user_data.get('email', 'Not public')}") - - return user_data - - -def get_token(): - """Get the current access token if available.""" - token_path = Path(TOKEN_FILE) - - if not token_path.exists(): - return None - - try: - return token_path.read_text().strip() - except Exception: - return None - - -def validate_token(): - """Validate that the current token is still valid.""" - token = get_token() - if not token: - return False - - url = "https://api.github.com/user" - headers = { - "Accept": "application/vnd.github+json", - "Authorization": f"Bearer {token}", - } - - try: - response = requests.get(url, headers=headers) - return response.status_code == 200 - except Exception: - return False - - -def ensure_authenticated(): - """Ensure user is authenticated, prompting login if necessary.""" - if validate_token(): - return get_token() - - print("GitHub authentication required.") - login() - return get_token() - - -def logout(): - """Remove the stored access token.""" - token_path = Path(TOKEN_FILE) - - if token_path.exists(): - token_path.unlink() - print("Successfully logged out.") - else: - print("You are not currently logged in.") - - -def main(): - """Main CLI interface.""" - parser = argparse.ArgumentParser( - description="GitHub OAuth authentication for ThrillWiki CI/CD" - ) - parser.add_argument( - "command", - choices=["login", "logout", "whoami", "token", "validate"], - help="Command to execute", - ) - - if len(sys.argv) == 1: - parser.print_help() - sys.exit(1) - - args = parser.parse_args() - - if args.command == "login": - login() - elif args.command == "logout": - logout() - elif args.command == "whoami": - whoami() - elif args.command == "token": - token = get_token() - if token: - print(token) - else: - print("No token available. Run `login` first.") - sys.exit(1) - elif args.command == "validate": - if validate_token(): - print("Token is valid.") - else: - print("Token is invalid or missing.") - sys.exit(1) - - -if __name__ == "__main__": - main() diff --git a/shared/scripts/setup-vm-ci.sh b/shared/scripts/setup-vm-ci.sh deleted file mode 100755 index 20544002..00000000 --- a/shared/scripts/setup-vm-ci.sh +++ /dev/null @@ -1,268 +0,0 @@ -#!/bin/bash - -# ThrillWiki VM CI Setup Script -# This script helps set up the VM deployment system - -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -log() { - echo -e "${BLUE}[SETUP]${NC} $1" -} - -log_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -log_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -log_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Configuration prompts -prompt_config() { - log "Setting up ThrillWiki VM CI/CD system..." - echo - - read -p "Enter your VM IP address: " VM_IP - read -p "Enter your VM username (default: ubuntu): " VM_USER - VM_USER=${VM_USER:-ubuntu} - - read -p "Enter your GitHub repository URL: " REPO_URL - read -p "Enter your GitHub webhook secret: " WEBHOOK_SECRET - - read -p "Enter local webhook port (default: 9000): " WEBHOOK_PORT - WEBHOOK_PORT=${WEBHOOK_PORT:-9000} - - read -p "Enter VM project path (default: /home/$VM_USER/thrillwiki): " VM_PROJECT_PATH - VM_PROJECT_PATH=${VM_PROJECT_PATH:-/home/$VM_USER/thrillwiki} -} - -# Create SSH key -setup_ssh() { - log "Setting up SSH keys..." - - local ssh_key_path="$HOME/.ssh/thrillwiki_vm" - - if [ ! -f "$ssh_key_path" ]; then - ssh-keygen -t rsa -b 4096 -f "$ssh_key_path" -N "" - log_success "SSH key generated: $ssh_key_path" - - log "Please copy the following public key to your VM:" - echo "---" - cat "$ssh_key_path.pub" - echo "---" - echo - log "Run this on your VM:" - echo "mkdir -p ~/.ssh && echo '$(cat "$ssh_key_path.pub")' >> ~/.ssh/***REMOVED*** && chmod 600 ~/.ssh/***REMOVED***" - echo - read -p "Press Enter when you've added the key to your VM..." - else - log "SSH key already exists: $ssh_key_path" - fi - - # Test SSH connection - log "Testing SSH connection..." - if ssh -i "$ssh_key_path" -o ConnectTimeout=5 -o StrictHostKeyChecking=no "$VM_USER@$VM_IP" "echo 'SSH connection successful'"; then - log_success "SSH connection test passed" - else - log_error "SSH connection test failed" - exit 1 - fi -} - -# Create environment file -create_env_file() { - log "Creating webhook environment file..." - - cat > ***REMOVED***.webhook << EOF -# ThrillWiki Webhook Configuration -WEBHOOK_PORT=$WEBHOOK_PORT -WEBHOOK_SECRET=$WEBHOOK_SECRET -VM_HOST=$VM_IP -VM_PORT=22 -VM_USER=$VM_USER -VM_KEY_PATH=$HOME/.ssh/thrillwiki_vm -VM_PROJECT_PATH=$VM_PROJECT_PATH -REPO_URL=$REPO_URL -DEPLOY_BRANCH=main -EOF - - log_success "Environment file created: ***REMOVED***.webhook" -} - -# Setup VM -setup_vm() { - log "Setting up VM environment..." - - local ssh_key_path="$HOME/.ssh/thrillwiki_vm" - - # Create setup script for VM - cat > /tmp/vm_setup.sh << 'EOF' -#!/bin/bash -set -e - -echo "Setting up VM for ThrillWiki deployment..." - -# Update system -sudo apt update - -# Install required packages -sudo apt install -y git curl build-essential python3-pip lsof - -# Install UV if not present -if ! command -v uv &> /dev/null; then - echo "Installing UV..." - curl -LsSf https://astral.sh/uv/install.sh | sh - source ~/.cargo/env -fi - -# Clone repository if not present -if [ ! -d "thrillwiki" ]; then - echo "Cloning repository..." - git clone REPO_URL_PLACEHOLDER thrillwiki -fi - -cd thrillwiki - -# Install dependencies -uv sync - -# Create directories -mkdir -p logs backups - -# Make scripts executable -chmod +x scripts/*.sh - -echo "VM setup completed successfully!" -EOF - - # Replace placeholder with actual repo URL - sed -i.bak "s|REPO_URL_PLACEHOLDER|$REPO_URL|g" /tmp/vm_setup.sh - - # Copy and execute setup script on VM - scp -i "$ssh_key_path" /tmp/vm_setup.sh "$VM_USER@$VM_IP:/tmp/" - ssh -i "$ssh_key_path" "$VM_USER@$VM_IP" "bash /tmp/vm_setup.sh" - - log_success "VM setup completed" - - # Cleanup - rm /tmp/vm_setup.sh /tmp/vm_setup.sh.bak -} - -# Install systemd services -setup_services() { - log "Setting up systemd services on VM..." - - local ssh_key_path="$HOME/.ssh/thrillwiki_vm" - - # Copy service files and install them - ssh -i "$ssh_key_path" "$VM_USER@$VM_IP" << EOF -cd thrillwiki - -# Update service files with correct paths -sed -i 's|/home/ubuntu|/home/$VM_USER|g' scripts/systemd/*.service -sed -i 's|ubuntu|$VM_USER|g' scripts/systemd/*.service - -# Install services -sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ -sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/ - -# Reload and enable services -sudo systemctl daemon-reload -sudo systemctl enable thrillwiki.service - -echo "Services installed successfully!" -EOF - - log_success "Systemd services installed" -} - -# Test deployment -test_deployment() { - log "Testing VM deployment..." - - local ssh_key_path="$HOME/.ssh/thrillwiki_vm" - - ssh -i "$ssh_key_path" "$VM_USER@$VM_IP" << EOF -cd thrillwiki -./scripts/vm-deploy.sh -EOF - - log_success "Deployment test completed" -} - -# Start webhook listener -start_webhook() { - log "Starting webhook listener..." - - if [ -f "***REMOVED***.webhook" ]; then - log "Webhook configuration found. You can start the webhook listener with:" - echo " source ***REMOVED***.webhook && python3 scripts/webhook-listener.py" - echo - log "Or run it in the background:" - echo " nohup python3 scripts/webhook-listener.py > logs/webhook.log 2>&1 &" - else - log_error "Webhook configuration not found!" - exit 1 - fi -} - -# GitHub webhook instructions -github_instructions() { - log "GitHub Webhook Setup Instructions:" - echo - echo "1. Go to your GitHub repository: $REPO_URL" - echo "2. Navigate to Settings → Webhooks" - echo "3. Click 'Add webhook'" - echo "4. Configure:" - echo " - Payload URL: http://YOUR_PUBLIC_IP:$WEBHOOK_PORT/webhook" - echo " - Content type: application/json" - echo " - Secret: $WEBHOOK_SECRET" - echo " - Events: Just the push event" - echo "5. Click 'Add webhook'" - echo - log_warning "Make sure port $WEBHOOK_PORT is open on your firewall!" -} - -# Main setup flow -main() { - log "ThrillWiki VM CI/CD Setup" - echo "==========================" - echo - - # Create logs directory - mkdir -p logs - - # Get configuration - prompt_config - - # Setup steps - setup_ssh - create_env_file - setup_vm - setup_services - test_deployment - - # Final instructions - echo - log_success "Setup completed successfully!" - echo - start_webhook - echo - github_instructions - - log "Setup log saved to: logs/setup.log" -} - -# Run main function and log output -main "$@" 2>&1 | tee logs/setup.log \ No newline at end of file diff --git a/shared/scripts/start-servers.sh b/shared/scripts/start-servers.sh deleted file mode 100755 index 7f91befd..00000000 --- a/shared/scripts/start-servers.sh +++ /dev/null @@ -1,575 +0,0 @@ -#!/bin/bash - -# ThrillWiki Server Start Script -# Stops any running servers, clears caches, runs migrations, and starts both servers -# Works whether servers are currently running or not -# Usage: ./start-servers.sh - -set -e # Exit on any error - -# Global variables for process management -BACKEND_PID="" -FRONTEND_PID="" -CLEANUP_PERFORMED=false - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -# Script directory and project root -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)" -BACKEND_DIR="$PROJECT_ROOT/backend" -FRONTEND_DIR="$PROJECT_ROOT/frontend" - -# Function to print colored output -print_status() { - echo -e "${BLUE}[INFO]${NC} $1" -} - -print_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -print_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -print_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Function for graceful shutdown -graceful_shutdown() { - if [ "$CLEANUP_PERFORMED" = true ]; then - return 0 - fi - - CLEANUP_PERFORMED=true - - print_warning "Received shutdown signal - performing graceful shutdown..." - - # Disable further signal handling to prevent recursive calls - trap - INT TERM - - # Kill backend server if running - if [ -n "$BACKEND_PID" ] && kill -0 "$BACKEND_PID" 2>/dev/null; then - print_status "Stopping backend server (PID: $BACKEND_PID)..." - kill -TERM "$BACKEND_PID" 2>/dev/null || true - - # Wait up to 10 seconds for graceful shutdown - local count=0 - while [ $count -lt 10 ] && kill -0 "$BACKEND_PID" 2>/dev/null; do - sleep 1 - count=$((count + 1)) - done - - # Force kill if still running - if kill -0 "$BACKEND_PID" 2>/dev/null; then - print_warning "Force killing backend server..." - kill -KILL "$BACKEND_PID" 2>/dev/null || true - fi - print_success "Backend server stopped" - else - print_status "Backend server not running or already stopped" - fi - - # Kill frontend server if running - if [ -n "$FRONTEND_PID" ] && kill -0 "$FRONTEND_PID" 2>/dev/null; then - print_status "Stopping frontend server (PID: $FRONTEND_PID)..." - kill -TERM "$FRONTEND_PID" 2>/dev/null || true - - # Wait up to 10 seconds for graceful shutdown - local count=0 - while [ $count -lt 10 ] && kill -0 "$FRONTEND_PID" 2>/dev/null; do - sleep 1 - count=$((count + 1)) - done - - # Force kill if still running - if kill -0 "$FRONTEND_PID" 2>/dev/null; then - print_warning "Force killing frontend server..." - kill -KILL "$FRONTEND_PID" 2>/dev/null || true - fi - print_success "Frontend server stopped" - else - print_status "Frontend server not running or already stopped" - fi - - # Clear PID files if they exist - if [ -f "$PROJECT_ROOT/shared/logs/backend.pid" ]; then - rm -f "$PROJECT_ROOT/shared/logs/backend.pid" - fi - if [ -f "$PROJECT_ROOT/shared/logs/frontend.pid" ]; then - rm -f "$PROJECT_ROOT/shared/logs/frontend.pid" - fi - - print_success "Graceful shutdown completed" - exit 0 -} - -# Function to kill processes by pattern -kill_processes() { - local pattern="$1" - local description="$2" - - print_status "Checking for $description processes..." - - # Find and kill processes - local pids=$(pgrep -f "$pattern" 2>/dev/null || true) - - if [ -n "$pids" ]; then - print_status "Found $description processes, stopping them..." - echo "$pids" | xargs kill -TERM 2>/dev/null || true - sleep 2 - - # Force kill if still running - local remaining_pids=$(pgrep -f "$pattern" 2>/dev/null || true) - if [ -n "$remaining_pids" ]; then - print_warning "Force killing remaining $description processes..." - echo "$remaining_pids" | xargs kill -KILL 2>/dev/null || true - fi - - print_success "$description processes stopped" - else - print_status "No $description processes found (this is fine)" - fi -} - -# Function to clear Django cache -clear_django_cache() { - print_status "Clearing Django cache..." - - cd "$BACKEND_DIR" - - # Clear Django cache - if command -v uv >/dev/null 2>&1; then - if ! uv run manage.py clear_cache 2>clear_cache_error.log; then - print_error "Django clear_cache command failed:" - cat clear_cache_error.log - rm -f clear_cache_error.log - exit 1 - else - rm -f clear_cache_error.log - fi - else - print_error "uv not found! Please install uv first." - exit 1 - fi - - # Remove Python cache files - find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true - find . -name "*.pyc" -delete 2>/dev/null || true - find . -name "*.pyo" -delete 2>/dev/null || true - - print_success "Django cache cleared" -} - -# Function to clear frontend cache -clear_frontend_cache() { - print_status "Clearing frontend cache..." - - cd "$FRONTEND_DIR" - - # Remove node_modules/.cache if it exists - if [ -d "node_modules/.cache" ]; then - rm -rf node_modules/.cache - print_status "Removed node_modules/.cache" - fi - - # Remove .nuxt cache if it exists (for Nuxt projects) - if [ -d ".nuxt" ]; then - rm -rf .nuxt - print_status "Removed .nuxt cache" - fi - - # Remove dist/build directories - if [ -d "dist" ]; then - rm -rf dist - print_status "Removed dist directory" - fi - - if [ -d "build" ]; then - rm -rf build - print_status "Removed build directory" - fi - - # Clear pnpm cache - if command -v pnpm >/dev/null 2>&1; then - pnpm store prune 2>/dev/null || print_warning "Could not prune pnpm store" - else - print_error "pnpm not found! Please install pnpm first." - exit 1 - fi - - print_success "Frontend cache cleared" -} - -# Function to run Django migrations -run_migrations() { - print_status "Running Django migrations..." - - cd "$BACKEND_DIR" - - # Check for pending migrations - if uv run python manage.py showmigrations --plan | grep -q "\[ \]"; then - print_status "Pending migrations found, applying..." - uv run python manage.py migrate - print_success "Migrations applied successfully" - else - print_status "No pending migrations found" - fi - - # Run any custom management commands if needed - # uv run python manage.py collectstatic --noinput --clear 2>/dev/null || print_warning "collectstatic failed or not needed" -} - -# Function to start backend server -start_backend() { - print_status "Starting Django backend server with runserver_plus (verbose output)..." - - cd "$BACKEND_DIR" - - # Start Django development server with runserver_plus for enhanced features and verbose output - print_status "Running: uv run python manage.py runserver_plus 8000 --verbosity=2" - uv run python manage.py runserver_plus 8000 --verbosity=2 & - BACKEND_PID=$! - - # Make sure the background process can receive signals - disown -h "$BACKEND_PID" 2>/dev/null || true - - # Wait a moment and check if it started successfully - sleep 3 - if kill -0 $BACKEND_PID 2>/dev/null; then - print_success "Backend server started (PID: $BACKEND_PID)" - echo $BACKEND_PID > ../shared/logs/backend.pid - else - print_error "Failed to start backend server" - return 1 - fi -} - -# Function to start frontend server -start_frontend() { - print_status "Starting frontend server with verbose output..." - - cd "$FRONTEND_DIR" - - # Install dependencies if node_modules doesn't exist or package.json is newer - if [ ! -d "node_modules" ] || [ "package.json" -nt "node_modules" ]; then - print_status "Installing/updating frontend dependencies..." - pnpm install - fi - - # Start frontend development server using Vite with explicit port, auto-open, and verbose output - # --port 5173: Use standard Vite port - # --open: Automatically open browser when ready - # --host localhost: Ensure it binds to localhost - # --debug: Enable debug logging - print_status "Starting Vite development server with verbose output and auto-browser opening..." - print_status "Running: pnpm vite --port 5173 --open --host localhost --debug" - pnpm vite --port 5173 --open --host localhost --debug & - FRONTEND_PID=$! - - # Make sure the background process can receive signals - disown -h "$FRONTEND_PID" 2>/dev/null || true - - # Wait a moment and check if it started successfully - sleep 3 - if kill -0 $FRONTEND_PID 2>/dev/null; then - print_success "Frontend server started (PID: $FRONTEND_PID) - browser should open automatically" - echo $FRONTEND_PID > ../shared/logs/frontend.pid - else - print_error "Failed to start frontend server" - return 1 - fi -} - -# Function to detect operating system -detect_os() { - case "$(uname -s)" in - Darwin*) echo "macos";; - Linux*) echo "linux";; - *) echo "unknown";; - esac -} - -# Function to open browser on the appropriate OS -open_browser() { - local url="$1" - local os=$(detect_os) - - print_status "Opening browser to $url..." - - case "$os" in - "macos") - if command -v open >/dev/null 2>&1; then - open "$url" 2>/dev/null || print_warning "Failed to open browser automatically" - else - print_warning "Cannot open browser: 'open' command not available" - fi - ;; - "linux") - if command -v xdg-open >/dev/null 2>&1; then - xdg-open "$url" 2>/dev/null || print_warning "Failed to open browser automatically" - else - print_warning "Cannot open browser: 'xdg-open' command not available" - fi - ;; - *) - print_warning "Cannot open browser automatically: Unsupported operating system" - ;; - esac -} - -# Function to verify frontend is responding (simplified since port is known) -verify_frontend_ready() { - local frontend_url="http://localhost:5173" - local max_checks=15 - local check=0 - - print_status "Verifying frontend server is responding at $frontend_url..." - - while [ $check -lt $max_checks ]; do - local response_code=$(curl -s -o /dev/null -w "%{http_code}" "$frontend_url" 2>/dev/null) - if [ "$response_code" = "200" ] || [ "$response_code" = "301" ] || [ "$response_code" = "302" ] || [ "$response_code" = "404" ]; then - print_success "Frontend server is responding (HTTP $response_code)" - return 0 - fi - - if [ $((check % 3)) -eq 0 ]; then - print_status "Waiting for frontend to respond... (attempt $((check + 1))/$max_checks)" - fi - sleep 2 - check=$((check + 1)) - done - - print_warning "Frontend may still be starting up" - return 1 -} - -# Function to verify servers are responding -verify_servers_ready() { - print_status "Verifying both servers are responding..." - - # Check backend - local backend_ready=false - local frontend_ready=false - local max_checks=10 - local check=0 - - while [ $check -lt $max_checks ]; do - # Check backend - if [ "$backend_ready" = false ]; then - local backend_response=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:8000" 2>/dev/null) - if [ "$backend_response" = "200" ] || [ "$backend_response" = "301" ] || [ "$backend_response" = "302" ] || [ "$backend_response" = "404" ]; then - print_success "Backend server is responding (HTTP $backend_response)" - backend_ready=true - fi - fi - - # Check frontend - if [ "$frontend_ready" = false ]; then - local frontend_response=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:5173" 2>/dev/null) - if [ "$frontend_response" = "200" ] || [ "$frontend_response" = "301" ] || [ "$frontend_response" = "302" ] || [ "$frontend_response" = "404" ]; then - print_success "Frontend server is responding (HTTP $frontend_response)" - frontend_ready=true - fi - fi - - # Both ready? - if [ "$backend_ready" = true ] && [ "$frontend_ready" = true ]; then - print_success "Both servers are responding!" - return 0 - fi - - sleep 2 - check=$((check + 1)) - done - - # Show status of what's working - if [ "$backend_ready" = true ]; then - print_success "Backend is ready at http://localhost:8000" - else - print_warning "Backend may still be starting up" - fi - - if [ "$frontend_ready" = true ]; then - print_success "Frontend is ready at http://localhost:5173" - else - print_warning "Frontend may still be starting up" - fi -} - -# Function to create logs directory if it doesn't exist -ensure_logs_dir() { - local logs_dir="$PROJECT_ROOT/shared/logs" - if [ ! -d "$logs_dir" ]; then - mkdir -p "$logs_dir" - print_status "Created logs directory: $logs_dir" - fi -} - -# Function to validate project structure -validate_project() { - if [ ! -d "$BACKEND_DIR" ]; then - print_error "Backend directory not found: $BACKEND_DIR" - exit 1 - fi - - if [ ! -d "$FRONTEND_DIR" ]; then - print_error "Frontend directory not found: $FRONTEND_DIR" - exit 1 - fi - - if [ ! -f "$BACKEND_DIR/manage.py" ]; then - print_error "Django manage.py not found in: $BACKEND_DIR" - exit 1 - fi - - if [ ! -f "$FRONTEND_DIR/package.json" ]; then - print_error "Frontend package.json not found in: $FRONTEND_DIR" - exit 1 - fi -} - -# Function to kill processes using specific ports -kill_port_processes() { - local port="$1" - local description="$2" - - print_status "Checking for processes using port $port ($description)..." - - # Find processes using the specific port - local pids=$(lsof -ti :$port 2>/dev/null || true) - - if [ -n "$pids" ]; then - print_warning "Found processes using port $port, killing them..." - echo "$pids" | xargs kill -TERM 2>/dev/null || true - sleep 2 - - # Force kill if still running - local remaining_pids=$(lsof -ti :$port 2>/dev/null || true) - if [ -n "$remaining_pids" ]; then - print_warning "Force killing remaining processes on port $port..." - echo "$remaining_pids" | xargs kill -KILL 2>/dev/null || true - fi - - print_success "Port $port cleared" - else - print_status "Port $port is available" - fi -} - -# Function to check and clear required ports -check_and_clear_ports() { - print_status "Checking and clearing required ports..." - - # Kill processes using our specific ports - kill_port_processes 8000 "Django backend" - kill_port_processes 5173 "Frontend Vite" -} - -# Main execution function -main() { - print_status "ThrillWiki Server Start Script Starting..." - print_status "This script works whether servers are currently running or not." - print_status "Project root: $PROJECT_ROOT" - - # Set up signal traps EARLY - before any long-running operations - print_status "Setting up signal handlers for graceful shutdown..." - trap 'graceful_shutdown' INT TERM - - # Validate project structure - validate_project - - # Ensure logs directory exists - ensure_logs_dir - - # Check and clear ports - check_and_clear_ports - - # Kill existing server processes (if any) - print_status "=== Stopping Any Running Servers ===" - print_status "Note: It's perfectly fine if no servers are currently running" - kill_processes "manage.py runserver" "Django backend" - kill_processes "pnpm.*dev\|npm.*dev\|yarn.*dev\|node.*dev" "Frontend development" - kill_processes "uvicorn\|gunicorn" "Python web servers" - - # Clear caches - print_status "=== Clearing Caches ===" - clear_django_cache - clear_frontend_cache - - # Run migrations - print_status "=== Running Migrations ===" - run_migrations - - # Start servers - print_status "=== Starting Servers ===" - - # Start backend first - if start_backend; then - print_success "Backend server is running" - else - print_error "Failed to start backend server" - exit 1 - fi - - # Start frontend - if start_frontend; then - print_success "Frontend server is running" - else - print_error "Failed to start frontend server" - print_status "Backend server is still running" - exit 1 - fi - - # Verify servers are responding - print_status "=== Verifying Servers ===" - verify_servers_ready - - # Final status - print_status "=== Server Status ===" - print_success "✅ Backend server: http://localhost:8000 (Django with runserver_plus)" - print_success "✅ Frontend server: http://localhost:5173 (Vite with verbose output)" - print_status "🌐 Browser should have opened automatically via Vite --open" - print_status "🔧 To stop servers, use: kill \$(cat $PROJECT_ROOT/shared/logs/backend.pid) \$(cat $PROJECT_ROOT/shared/logs/frontend.pid)" - print_status "📋 Both servers are running with verbose output directly in your terminal" - - print_success "🚀 All servers started successfully with full verbose output!" - - # Keep the script running and wait for signals - wait_for_servers -} - -# Wait for servers function to keep script running and handle signals -wait_for_servers() { - print_status "🚀 Servers are running! Press Ctrl+C for graceful shutdown." - print_status "📋 Backend: http://localhost:8000 | Frontend: http://localhost:5173" - - # Keep the script alive and wait for signals - while [ "$CLEANUP_PERFORMED" != true ]; do - # Check if both servers are still running - if [ -n "$BACKEND_PID" ] && ! kill -0 "$BACKEND_PID" 2>/dev/null; then - print_error "Backend server has stopped unexpectedly" - graceful_shutdown - break - fi - - if [ -n "$FRONTEND_PID" ] && ! kill -0 "$FRONTEND_PID" 2>/dev/null; then - print_error "Frontend server has stopped unexpectedly" - graceful_shutdown - break - fi - - # Use shorter sleep and check for signals more frequently - sleep 1 - done -} - -# Run main function (no traps set up initially) -main "$@" \ No newline at end of file diff --git a/shared/scripts/systemd/thrillwiki-automation.env.example b/shared/scripts/systemd/thrillwiki-automation.env.example deleted file mode 100644 index 1c1d84c3..00000000 --- a/shared/scripts/systemd/thrillwiki-automation.env.example +++ /dev/null @@ -1,296 +0,0 @@ -# ThrillWiki Automation Service Environment Configuration -# Copy this file to thrillwiki-automation***REMOVED*** and customize for your environment -# -# Security Note: This file should have restricted permissions (600) as it may contain -# sensitive information like GitHub Personal Access Tokens - -# [AWS-SECRET-REMOVED]==================================== -# PROJECT CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Base project directory (usually auto-detected) -# PROJECT_DIR=/home/ubuntu/thrillwiki - -# Service name for systemd integration -# SERVICE_NAME=thrillwiki - -# [AWS-SECRET-REMOVED]==================================== -# GITHUB REPOSITORY CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# GitHub repository remote name -# GITHUB_REPO=origin - -# Branch to pull from -# GITHUB_BRANCH=main - -# GitHub Personal Access Token (PAT) - Required for private repositories -# Generate at: https://github.com/settings/tokens -# Required permissions: repo (Full control of private repositories) -# GITHUB_TOKEN=ghp_your_personal_access_token_here - -# GitHub token file location (alternative to GITHUB_TOKEN) -# GITHUB_TOKEN_FILE=/home/ubuntu/thrillwiki/.github-pat -GITHUB_PAT_FILE=/home/ubuntu/thrillwiki/.github-pat - -# [AWS-SECRET-REMOVED]==================================== -# AUTOMATION TIMING CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Repository pull interval in seconds (default: 300 = 5 minutes) -# PULL_INTERVAL=300 - -# Health check interval in seconds (default: 60 = 1 minute) -# HEALTH_CHECK_INTERVAL=60 - -# Server startup timeout in seconds (default: 120 = 2 minutes) -# STARTUP_TIMEOUT=120 - -# Restart delay after failure in seconds (default: 10) -# RESTART_DELAY=10 - -# [AWS-SECRET-REMOVED]==================================== -# LOGGING CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Log directory (default: project_dir/logs) -# LOG_DIR=/home/ubuntu/thrillwiki/logs - -# Log file path -# LOG_[AWS-SECRET-REMOVED]proof-automation.log - -# Maximum log file size in bytes (default: 10485760 = 10MB) -# MAX_LOG_SIZE=10485760 - -# Lock file location to prevent multiple instances -# LOCK_FILE=/tmp/thrillwiki-bulletproof.lock - -# [AWS-SECRET-REMOVED]==================================== -# DEVELOPMENT SERVER CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Server host address (default: 0.0.0.0 for all interfaces) -# SERVER_HOST=0.0.0.0 - -# Server port (default: 8000) -# SERVER_PORT=8000 - -# [AWS-SECRET-REMOVED]==================================== -# DEPLOYMENT CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Deployment preset (dev, prod, demo, testing) -# DEPLOYMENT_PRESET=dev - -# Repository URL for deployment -# GITHUB_REPO_URL=https://github.com/username/repository.git - -# Repository branch for deployment -# GITHUB_REPO_BRANCH=main - -# Enable Django project setup during deployment -# DJANGO_PROJECT_SETUP=true - -# Skip GitHub authentication setup -# SKIP_GITHUB_SETUP=false - -# Skip repository configuration -# SKIP_REPO_CONFIG=false - -# Skip systemd service setup -# SKIP_SERVICE_SETUP=false - -# Force deployment even if target exists -# FORCE_DEPLOY=false - -# Remote deployment user -# REMOTE_USER=ubuntu - -# Remote deployment host -# REMOTE_HOST= - -# Remote deployment port -# REMOTE_PORT=22 - -# Remote deployment path -# REMOTE_PATH=/home/ubuntu/thrillwiki - -# [AWS-SECRET-REMOVED]==================================== -# DJANGO CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# Django settings module -# DJANGO_SETTINGS_MODULE=thrillwiki.settings - -# Python path -# PYTHONPATH=/home/ubuntu/thrillwiki - -# UV executable path (for systems where UV is not in standard PATH) -# UV_EXECUTABLE=/home/ubuntu/.local/bin/uv - -# Django development server command (used by bulletproof automation) -# DJANGO_RUNSERVER_CMD=uv run manage.py tailwind runserver - -# Enable development server auto-cleanup (kills processes on port 8000) -# AUTO_CLEANUP_PROCESSES=true - -# [AWS-SECRET-REMOVED]==================================== -# ADVANCED CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# GitHub authentication script location -# GITHUB_AUTH_[AWS-SECRET-REMOVED]ithub-auth.py - -# Enable verbose logging (true/false) -# VERBOSE_LOGGING=false - -# Enable debug mode for troubleshooting (true/false) -# DEBUG_MODE=false - -# Custom git remote URL (overrides GITHUB_REPO if set) -# CUSTOM_GIT_REMOTE=https://github.com/username/repository.git - -# Email notifications for critical failures (requires email configuration) -# NOTIFICATION_EMAIL=admin@example.com - -# Maximum consecutive failures before alerting (default: 5) -# MAX_CONSECUTIVE_FAILURES=5 - -# Enable automatic dependency updates (true/false, default: true) -# AUTO_UPDATE_DEPENDENCIES=true - -# Enable automatic migrations on code changes (true/false, default: true) -# AUTO_MIGRATE=true - -# Enable automatic static file collection (true/false, default: true) -# AUTO_COLLECTSTATIC=true - -# [AWS-SECRET-REMOVED]==================================== -# SECURITY CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# GitHub authentication method (token|ssh|https) -# Default: token (uses GITHUB_TOKEN or GITHUB_TOKEN_FILE) -# GITHUB_AUTH_METHOD=token - -# SSH key path for git operations (when using ssh auth method) -# SSH_KEY_PATH=/home/ubuntu/.ssh/***REMOVED*** - -# Git user configuration for commits -# GIT_USER_NAME="ThrillWiki Automation" -# GIT_USER_EMAIL="automation@thrillwiki.local" - -# [AWS-SECRET-REMOVED]==================================== -# MONITORING AND HEALTH CHECKS -# [AWS-SECRET-REMOVED]==================================== - -# Health check URL to verify server is running -# HEALTH_CHECK_URL=http://localhost:8000/health/ - -# Health check timeout in seconds -# HEALTH_CHECK_TIMEOUT=30 - -# Enable system resource monitoring (true/false) -# MONITOR_RESOURCES=true - -# Memory usage threshold for warnings (in MB) -# MEMORY_WARNING_THRESHOLD=1024 - -# CPU usage threshold for warnings (percentage) -# CPU_WARNING_THRESHOLD=80 - -# Disk usage threshold for warnings (percentage) -# DISK_WARNING_THRESHOLD=90 - -# [AWS-SECRET-REMOVED]==================================== -# INTEGRATION SETTINGS -# [AWS-SECRET-REMOVED]==================================== - -# Webhook integration (if using thrillwiki-webhook service) -# WEBHOOK_INTEGRATION=true - -# Slack webhook URL for notifications (optional) -# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook/url - -# Discord webhook URL for notifications (optional) -# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/your/webhook/url - -# [AWS-SECRET-REMOVED]==================================== -# ENVIRONMENT AND SYSTEM CONFIGURATION -# [AWS-SECRET-REMOVED]==================================== - -# System PATH additions (for UV and other tools) -# ADDITIONAL_PATH=/home/ubuntu/.local/bin:/home/ubuntu/.cargo/bin - -# Python environment configuration -# PYTHON_EXECUTABLE=python3 - -# Enable verbose logging for debugging -# VERBOSE_LOGGING=false - -# Debug mode for development -# DEBUG_MODE=false - -# Service restart configuration -# MAX_RESTART_ATTEMPTS=3 -# RESTART_COOLDOWN=300 - -# Health check configuration -# HEALTH_CHECK_URL=http://localhost:8000/health/ -# HEALTH_CHECK_TIMEOUT=30 - -# System resource monitoring -# MONITOR_RESOURCES=true -# MEMORY_WARNING_THRESHOLD=1024 -# CPU_WARNING_THRESHOLD=80 -# DISK_WARNING_THRESHOLD=90 - -# Lock file configuration -# LOCK_FILE=/tmp/thrillwiki-bulletproof.lock - -# GitHub authentication method (token|ssh|https) -# GITHUB_AUTH_METHOD=token - -# SSH key path for git operations (when using ssh auth method) -# SSH_KEY_PATH=/home/ubuntu/.ssh/***REMOVED*** - -# Git user configuration for commits -# GIT_USER_NAME="ThrillWiki Automation" -# GIT_USER_EMAIL="automation@thrillwiki.local" - -# [AWS-SECRET-REMOVED]==================================== -# USAGE EXAMPLES -# [AWS-SECRET-REMOVED]==================================== - -# Example 1: Basic setup with GitHub PAT -# GITHUB_TOKEN=ghp_your_token_here -# PULL_INTERVAL=300 -# AUTO_MIGRATE=true - -# Example 2: Enhanced monitoring setup -# HEALTH_CHECK_INTERVAL=30 -# MONITOR_RESOURCES=true -# NOTIFICATION_EMAIL=admin@thrillwiki.com -# SLACK_WEBHOOK_URL=https://hooks.slack.com/services/your/webhook - -# Example 3: Development environment with frequent pulls -# PULL_INTERVAL=60 -# DEBUG_MODE=true -# VERBOSE_LOGGING=true -# AUTO_UPDATE_DEPENDENCIES=true - -# [AWS-SECRET-REMOVED]==================================== -# INSTALLATION NOTES -# [AWS-SECRET-REMOVED]==================================== - -# 1. Copy this file: cp thrillwiki-automation***REMOVED***.example thrillwiki-automation***REMOVED*** -# 2. Set secure permissions: chmod 600 thrillwiki-automation***REMOVED*** -# 3. Customize the settings above for your environment -# 4. Enable the service: sudo systemctl enable thrillwiki-automation -# 5. Start the service: sudo systemctl start thrillwiki-automation -# 6. Check status: sudo systemctl status thrillwiki-automation -# 7. View logs: sudo journalctl -u thrillwiki-automation -f - -# For security, ensure only the ubuntu user can read this file: -# sudo chown ubuntu:ubuntu thrillwiki-automation***REMOVED*** -# sudo chmod 600 thrillwiki-automation***REMOVED*** \ No newline at end of file diff --git a/shared/scripts/systemd/thrillwiki-automation.service b/shared/scripts/systemd/thrillwiki-automation.service deleted file mode 100644 index 4fe2b85e..00000000 --- a/shared/scripts/systemd/thrillwiki-automation.service +++ /dev/null @@ -1,106 +0,0 @@ -[Unit] -Description=ThrillWiki Bulletproof Development Automation -Documentation=man:thrillwiki-automation(8) -After=network.target -Wants=network.target -Before=thrillwiki.service -PartOf=thrillwiki.service - -[Service] -Type=simple -User=ubuntu -Group=ubuntu -[AWS-SECRET-REMOVED] -[AWS-SECRET-REMOVED]s/vm/bulletproof-automation.sh -ExecStop=/bin/kill -TERM $MAINPID -ExecReload=/bin/kill -HUP $MAINPID -Restart=always -RestartSec=10 -KillMode=mixed -KillSignal=SIGTERM -TimeoutStopSec=60 -TimeoutStartSec=120 -StartLimitIntervalSec=300 -StartLimitBurst=3 - -# Environment variables - Load from file for security -EnvironmentFile=-[AWS-SECRET-REMOVED]thrillwiki-automation***REMOVED*** -Environment=PROJECT_DIR=/home/ubuntu/thrillwiki -Environment=SERVICE_NAME=thrillwiki-automation -Environment=GITHUB_REPO=origin -Environment=GITHUB_BRANCH=main -Environment=PULL_INTERVAL=300 -Environment=HEALTH_CHECK_INTERVAL=60 -Environment=STARTUP_TIMEOUT=120 -Environment=RESTART_DELAY=10 -Environment=LOG_DIR=/home/ubuntu/thrillwiki/logs -Environment=MAX_LOG_SIZE=10485760 -Environment=SERVER_HOST=0.0.0.0 -Environment=SERVER_PORT=8000 -Environment=PATH=/home/ubuntu/.local/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin -[AWS-SECRET-REMOVED]llwiki - -# Security settings - Enhanced hardening for automation script -NoNewPrivileges=true -PrivateTmp=true -ProtectSystem=strict -ProtectHome=true -ProtectKernelTunables=true -ProtectKernelModules=true -ProtectControlGroups=true -RestrictSUIDSGID=true -RestrictRealtime=true -RestrictNamespaces=true -LockPersonality=true -MemoryDenyWriteExecute=false -RemoveIPC=true - -# File system permissions - Allow access to necessary directories -ReadWritePaths=/home/ubuntu/thrillwiki -[AWS-SECRET-REMOVED]ogs -[AWS-SECRET-REMOVED]edia -[AWS-SECRET-REMOVED]taticfiles -[AWS-SECRET-REMOVED]ploads -ReadWritePaths=/home/ubuntu/.cache -ReadWritePaths=/tmp -ReadOnlyPaths=/home/ubuntu/.github-pat -ReadOnlyPaths=/home/ubuntu/.ssh -ReadOnlyPaths=/home/ubuntu/.local - -# Resource limits - Appropriate for automation script -LimitNOFILE=65536 -LimitNPROC=1024 -MemoryMax=512M -CPUQuota=50% -TasksMax=256 - -# Timeouts -WatchdogSec=300 - -# Logging configuration -StandardOutput=journal -StandardError=journal -SyslogIdentifier=thrillwiki-automation -SyslogFacility=daemon -SyslogLevel=info -SyslogLevelPrefix=true - -# Enhanced logging for debugging -# Ensure logs are captured and rotated properly -LogsDirectory=thrillwiki-automation -LogsDirectoryMode=0755 -StateDirectory=thrillwiki-automation -StateDirectoryMode=0755 -RuntimeDirectory=thrillwiki-automation -RuntimeDirectoryMode=0755 - -# Capabilities - Minimal required capabilities -CapabilityBoundingSet= -AmbientCapabilities= -PrivateDevices=true -ProtectClock=true -ProtectHostname=true - -[Install] -WantedBy=multi-user.target -Also=thrillwiki.service \ No newline at end of file diff --git a/shared/scripts/systemd/thrillwiki-deployment.service b/shared/scripts/systemd/thrillwiki-deployment.service deleted file mode 100644 index f16acb42..00000000 --- a/shared/scripts/systemd/thrillwiki-deployment.service +++ /dev/null @@ -1,103 +0,0 @@ -[Unit] -Description=ThrillWiki Complete Deployment Automation Service -Documentation=man:thrillwiki-deployment(8) -After=network.target network-online.target -Wants=network-online.target -Before=thrillwiki-smart-deploy.timer -PartOf=thrillwiki-smart-deploy.timer - -[Service] -Type=simple -User=thrillwiki -Group=thrillwiki -[AWS-SECRET-REMOVED]wiki -[AWS-SECRET-REMOVED]ripts/vm/deploy-automation.sh -ExecStop=/bin/kill -TERM $MAINPID -ExecReload=/bin/kill -HUP $MAINPID -Restart=always -RestartSec=30 -KillMode=mixed -KillSignal=SIGTERM -TimeoutStopSec=120 -TimeoutStartSec=180 -StartLimitIntervalSec=600 -StartLimitBurst=3 - -# Environment variables - Load from file for security and preset integration -EnvironmentFile=-[AWS-SECRET-REMOVED]emd/thrillwiki-deployment***REMOVED*** -Environment=PROJECT_DIR=/home/thrillwiki/thrillwiki -Environment=SERVICE_NAME=thrillwiki-deployment -Environment=GITHUB_REPO=origin -Environment=GITHUB_BRANCH=main -Environment=DEPLOYMENT_MODE=automated -Environment=LOG_DIR=/home/thrillwiki/thrillwiki/logs -Environment=MAX_LOG_SIZE=10485760 -Environment=SERVER_HOST=0.0.0.0 -Environment=SERVER_PORT=8000 -Environment=PATH=/home/thrillwiki/.local/bin:/home/thrillwiki/.cargo/bin:/usr/local/bin:/usr/bin:/bin -[AWS-SECRET-REMOVED]thrillwiki - -# Security settings - Enhanced hardening for deployment automation -NoNewPrivileges=true -PrivateTmp=true -ProtectSystem=strict -ProtectHome=true -ProtectKernelTunables=true -ProtectKernelModules=true -ProtectControlGroups=true -RestrictSUIDSGID=true -RestrictRealtime=true -RestrictNamespaces=true -LockPersonality=true -MemoryDenyWriteExecute=false -RemoveIPC=true - -# File system permissions - Allow access to necessary directories -[AWS-SECRET-REMOVED]ki -[AWS-SECRET-REMOVED]ki/logs -[AWS-SECRET-REMOVED]ki/media -[AWS-SECRET-REMOVED]ki/staticfiles -[AWS-SECRET-REMOVED]ki/uploads -ReadWritePaths=/home/thrillwiki/.cache -ReadWritePaths=/tmp -ReadOnlyPaths=/home/thrillwiki/.github-pat -ReadOnlyPaths=/home/thrillwiki/.ssh -ReadOnlyPaths=/home/thrillwiki/.local - -# Resource limits - Appropriate for deployment automation -LimitNOFILE=65536 -LimitNPROC=2048 -MemoryMax=1G -CPUQuota=75% -TasksMax=512 - -# Timeouts and watchdog -WatchdogSec=600 -RuntimeMaxSec=0 - -# Logging configuration -StandardOutput=journal -StandardError=journal -SyslogIdentifier=thrillwiki-deployment -SyslogFacility=daemon -SyslogLevel=info -SyslogLevelPrefix=true - -# Enhanced logging for debugging -LogsDirectory=thrillwiki-deployment -LogsDirectoryMode=0755 -StateDirectory=thrillwiki-deployment -StateDirectoryMode=0755 -RuntimeDirectory=thrillwiki-deployment -RuntimeDirectoryMode=0755 - -# Capabilities - Minimal required capabilities -CapabilityBoundingSet= -AmbientCapabilities= -PrivateDevices=true -ProtectClock=true -ProtectHostname=true - -[Install] -WantedBy=multi-user.target -Also=thrillwiki-smart-deploy.timer \ No newline at end of file diff --git a/shared/scripts/systemd/thrillwiki-smart-deploy.service b/shared/scripts/systemd/thrillwiki-smart-deploy.service deleted file mode 100644 index b7d4721c..00000000 --- a/shared/scripts/systemd/thrillwiki-smart-deploy.service +++ /dev/null @@ -1,76 +0,0 @@ -[Unit] -Description=ThrillWiki Smart Deployment Service -Documentation=man:thrillwiki-smart-deploy(8) -After=network.target thrillwiki-deployment.service -Wants=network.target -PartOf=thrillwiki-smart-deploy.timer - -[Service] -Type=oneshot -User=thrillwiki -Group=thrillwiki -[AWS-SECRET-REMOVED]wiki -[AWS-SECRET-REMOVED]ripts/smart-deploy.sh -TimeoutStartSec=300 -TimeoutStopSec=60 - -# Environment variables - Load from deployment configuration -EnvironmentFile=-[AWS-SECRET-REMOVED]emd/thrillwiki-deployment***REMOVED*** -Environment=PROJECT_DIR=/home/thrillwiki/thrillwiki -Environment=SERVICE_NAME=thrillwiki-smart-deploy -Environment=DEPLOYMENT_MODE=timer -Environment=LOG_DIR=/home/thrillwiki/thrillwiki/logs -Environment=PATH=/home/thrillwiki/.local/bin:/home/thrillwiki/.cargo/bin:/usr/local/bin:/usr/bin:/bin -[AWS-SECRET-REMOVED]thrillwiki - -# Security settings - Inherited from main deployment service -NoNewPrivileges=true -PrivateTmp=true -ProtectSystem=strict -ProtectHome=true -ProtectKernelTunables=true -ProtectKernelModules=true -ProtectControlGroups=true -RestrictSUIDSGID=true -RestrictRealtime=true -RestrictNamespaces=true -LockPersonality=true -MemoryDenyWriteExecute=false -RemoveIPC=true - -# File system permissions -[AWS-SECRET-REMOVED]ki -[AWS-SECRET-REMOVED]ki/logs -[AWS-SECRET-REMOVED]ki/media -[AWS-SECRET-REMOVED]ki/staticfiles -[AWS-SECRET-REMOVED]ki/uploads -ReadWritePaths=/home/thrillwiki/.cache -ReadWritePaths=/tmp -ReadOnlyPaths=/home/thrillwiki/.github-pat -ReadOnlyPaths=/home/thrillwiki/.ssh -ReadOnlyPaths=/home/thrillwiki/.local - -# Resource limits -LimitNOFILE=65536 -LimitNPROC=1024 -MemoryMax=512M -CPUQuota=50% -TasksMax=256 - -# Logging configuration -StandardOutput=journal -StandardError=journal -SyslogIdentifier=thrillwiki-smart-deploy -SyslogFacility=daemon -SyslogLevel=info -SyslogLevelPrefix=true - -# Capabilities -CapabilityBoundingSet= -AmbientCapabilities= -PrivateDevices=true -ProtectClock=true -ProtectHostname=true - -[Install] -WantedBy=multi-user.target \ No newline at end of file diff --git a/shared/scripts/systemd/thrillwiki-smart-deploy.timer b/shared/scripts/systemd/thrillwiki-smart-deploy.timer deleted file mode 100644 index b4f848cf..00000000 --- a/shared/scripts/systemd/thrillwiki-smart-deploy.timer +++ /dev/null @@ -1,17 +0,0 @@ -[Unit] -Description=ThrillWiki Smart Deployment Timer -Documentation=man:thrillwiki-smart-deploy(8) -Requires=thrillwiki-smart-deploy.service -After=thrillwiki-deployment.service - -[Timer] -# Default timer configuration (can be overridden by environment) -OnBootSec=5min -OnUnitActiveSec=5min -Unit=thrillwiki-smart-deploy.service -Persistent=true -RandomizedDelaySec=30sec - -[Install] -WantedBy=timers.target -Also=thrillwiki-smart-deploy.service \ No newline at end of file diff --git a/shared/scripts/systemd/thrillwiki-webhook.service b/shared/scripts/systemd/thrillwiki-webhook.service deleted file mode 100644 index 7864dc68..00000000 --- a/shared/scripts/systemd/thrillwiki-webhook.service +++ /dev/null @@ -1,39 +0,0 @@ -[Unit] -Description=ThrillWiki GitHub Webhook Listener -After=network.target -Wants=network.target - -[Service] -Type=simple -User=ubuntu -Group=ubuntu -[AWS-SECRET-REMOVED] -ExecStart=/usr/bin/python3 /home/ubuntu/thrillwiki/scripts/webhook-listener.py -Restart=always -RestartSec=10 - -# Environment variables -Environment=WEBHOOK_PORT=9000 -Environment=WEBHOOK_SECRET=your_webhook_secret_here -Environment=VM_HOST=localhost -Environment=VM_PORT=22 -Environment=VM_USER=ubuntu -Environment=VM_KEY_PATH=/home/ubuntu/.ssh/***REMOVED*** -Environment=VM_PROJECT_PATH=/home/ubuntu/thrillwiki -Environment=REPO_URL=https://github.com/YOUR_USERNAME/thrillwiki_django_no_react.git -Environment=DEPLOY_BRANCH=main - -# Security settings -NoNewPrivileges=true -PrivateTmp=true -ProtectSystem=strict -ProtectHome=true -[AWS-SECRET-REMOVED]ogs - -# Logging -StandardOutput=journal -StandardError=journal -SyslogIdentifier=thrillwiki-webhook - -[Install] -WantedBy=multi-user.target \ No newline at end of file diff --git a/shared/scripts/systemd/thrillwiki.service b/shared/scripts/systemd/thrillwiki.service deleted file mode 100644 index 61255148..00000000 --- a/shared/scripts/systemd/thrillwiki.service +++ /dev/null @@ -1,45 +0,0 @@ -[Unit] -Description=ThrillWiki Django Application -After=network.target postgresql.service -Wants=network.target -Requires=postgresql.service - -[Service] -Type=forking -User=ubuntu -Group=ubuntu -[AWS-SECRET-REMOVED] -[AWS-SECRET-REMOVED]s/ci-start.sh -ExecStop=/bin/kill -TERM $MAINPID -ExecReload=/bin/kill -HUP $MAINPID -[AWS-SECRET-REMOVED]ngo.pid -Restart=always -RestartSec=10 - -# Environment variables -Environment=DJANGO_SETTINGS_MODULE=thrillwiki.settings -[AWS-SECRET-REMOVED]llwiki -Environment=PATH=/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin - -# Security settings -NoNewPrivileges=true -PrivateTmp=true -ProtectSystem=strict -ProtectHome=true -[AWS-SECRET-REMOVED]ogs -[AWS-SECRET-REMOVED]edia -[AWS-SECRET-REMOVED]taticfiles -[AWS-SECRET-REMOVED]ploads - -# Resource limits -LimitNOFILE=65536 -TimeoutStartSec=300 -TimeoutStopSec=30 - -# Logging -StandardOutput=journal -StandardError=journal -SyslogIdentifier=thrillwiki - -[Install] -WantedBy=multi-user.target \ No newline at end of file diff --git a/shared/scripts/test-automation.sh b/shared/scripts/test-automation.sh deleted file mode 100755 index 29da47e0..00000000 --- a/shared/scripts/test-automation.sh +++ /dev/null @@ -1,175 +0,0 @@ -#!/bin/bash - -# ThrillWiki Automation Test Script -# This script validates all automation components without actually running them - -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' - -log() { - echo -e "${BLUE}[TEST]${NC} $1" -} - -log_success() { - echo -e "${GREEN}[✓]${NC} $1" -} - -log_warning() { - echo -e "${YELLOW}[!]${NC} $1" -} - -log_error() { - echo -e "${RED}[✗]${NC} $1" -} - -# Test counters -TESTS_PASSED=0 -TESTS_FAILED=0 -TESTS_TOTAL=0 - -test_case() { - local name="$1" - local command="$2" - - ((TESTS_TOTAL++)) - log "Testing: $name" - - if eval "$command" >/dev/null 2>&1; then - log_success "$name" - ((TESTS_PASSED++)) - else - log_error "$name" - ((TESTS_FAILED++)) - fi -} - -test_case_with_output() { - local name="$1" - local command="$2" - local expected_pattern="$3" - - ((TESTS_TOTAL++)) - log "Testing: $name" - - local output - if output=$(eval "$command" 2>&1); then - if [[ -n "$expected_pattern" && ! "$output" =~ $expected_pattern ]]; then - log_error "$name (unexpected output)" - ((TESTS_FAILED++)) - else - log_success "$name" - ((TESTS_PASSED++)) - fi - else - log_error "$name (command failed)" - ((TESTS_FAILED++)) - fi -} - -log "🧪 Starting ThrillWiki Automation Tests" -echo "======================================" - -# Test 1: File Permissions -log "\n📁 Testing File Permissions..." -test_case "CI start script is executable" "[ -x scripts/ci-start.sh ]" -test_case "VM deploy script is executable" "[ -x scripts/vm-deploy.sh ]" -test_case "Webhook listener is executable" "[ -x scripts/webhook-listener.py ]" -test_case "VM manager is executable" "[ -x scripts/unraid/vm-manager.py ]" -test_case "Complete automation script is executable" "[ -x scripts/unraid/setup-complete-automation.sh ]" - -# Test 2: Script Syntax -log "\n🔍 Testing Script Syntax..." -test_case "CI start script syntax" "bash -n scripts/ci-start.sh" -test_case "VM deploy script syntax" "bash -n scripts/vm-deploy.sh" -test_case "Setup VM CI script syntax" "bash -n scripts/setup-vm-ci.sh" -test_case "Complete automation script syntax" "bash -n scripts/unraid/setup-complete-automation.sh" -test_case "Webhook listener Python syntax" "python3 -m py_compile scripts/webhook-listener.py" -test_case "VM manager Python syntax" "python3 -m py_compile scripts/unraid/vm-manager.py" - -# Test 3: Help Functions -log "\n❓ Testing Help Functions..." -test_case_with_output "VM manager help" "python3 scripts/unraid/vm-manager.py --help" "usage:" -test_case_with_output "Webhook listener help" "python3 scripts/webhook-listener.py --help" "usage:" -test_case_with_output "VM deploy script usage" "scripts/vm-deploy.sh invalid 2>&1" "Usage:" - -# Test 4: Configuration Validation -log "\n⚙️ Testing Configuration Validation..." -test_case_with_output "Webhook listener test mode" "python3 scripts/webhook-listener.py --test" "Configuration validation" - -# Test 5: Directory Structure -log "\n📂 Testing Directory Structure..." -test_case "Scripts directory exists" "[ -d scripts ]" -test_case "Unraid scripts directory exists" "[ -d scripts/unraid ]" -test_case "Systemd directory exists" "[ -d scripts/systemd ]" -test_case "Docs directory exists" "[ -d docs ]" -test_case "Logs directory created" "[ -d logs ]" - -# Test 6: Required Files -log "\n📄 Testing Required Files..." -test_case "ThrillWiki service file exists" "[ -f scripts/systemd/thrillwiki.service ]" -test_case "Webhook service file exists" "[ -f scripts/systemd/thrillwiki-webhook.service ]" -test_case "VM deployment setup doc exists" "[ -f docs/VM_DEPLOYMENT_SETUP.md ]" -test_case "Unraid automation doc exists" "[ -f docs/UNRAID_COMPLETE_AUTOMATION.md ]" -test_case "CI README exists" "[ -f CI_README.md ]" - -# Test 7: Python Dependencies -log "\n🐍 Testing Python Dependencies..." -test_case "Python 3 available" "command -v python3" -test_case "Requests module available" "python3 -c 'import requests'" -test_case "JSON module available" "python3 -c 'import json'" -test_case "OS module available" "python3 -c 'import os'" -test_case "Subprocess module available" "python3 -c 'import subprocess'" - -# Test 8: System Dependencies -log "\n🔧 Testing System Dependencies..." -test_case "SSH command available" "command -v ssh" -test_case "SCP command available" "command -v scp" -test_case "Bash available" "command -v bash" -test_case "Git available" "command -v git" - -# Test 9: UV Package Manager -log "\n📦 Testing UV Package Manager..." -if command -v uv >/dev/null 2>&1; then - log_success "UV package manager is available" - ((TESTS_PASSED++)) - test_case "UV version check" "uv --version" -else - log_warning "UV package manager not found (will be installed during setup)" - ((TESTS_PASSED++)) -fi -((TESTS_TOTAL++)) - -# Test 10: Django Project Structure -log "\n🌟 Testing Django Project Structure..." -test_case "Django manage.py exists" "[ -f manage.py ]" -test_case "Django settings module exists" "[ -f thrillwiki/settings.py ]" -test_case "PyProject.toml exists" "[ -f pyproject.toml ]" - -# Final Results -echo -log "📊 Test Results Summary" -echo "======================" -echo "Total Tests: $TESTS_TOTAL" -echo "Passed: $TESTS_PASSED" -echo "Failed: $TESTS_FAILED" - -if [ $TESTS_FAILED -eq 0 ]; then - echo - log_success "🎉 All tests passed! The automation system is ready." - echo - log "Next steps:" - echo "1. For complete automation: ./scripts/unraid/setup-complete-automation.sh" - echo "2. For manual setup: ./scripts/setup-vm-ci.sh" - echo "3. Read documentation: docs/UNRAID_COMPLETE_AUTOMATION.md" - exit 0 -else - echo - log_error "❌ Some tests failed. Please check the issues above." - exit 1 -fi \ No newline at end of file diff --git a/shared/scripts/unraid/.claude/settings.local.json b/shared/scripts/unraid/.claude/settings.local.json deleted file mode 100644 index d8e549f1..00000000 --- a/shared/scripts/unraid/.claude/settings.local.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "permissions": { - "additionalDirectories": [ - "/Users/talor/thrillwiki_django_no_react" - ], - "allow": [ - "Bash(uv run:*)" - ] - } -} \ No newline at end of file diff --git a/shared/scripts/unraid/README-NON-INTERACTIVE.md b/shared/scripts/unraid/README-NON-INTERACTIVE.md deleted file mode 100644 index e87dab8f..00000000 --- a/shared/scripts/unraid/README-NON-INTERACTIVE.md +++ /dev/null @@ -1,150 +0,0 @@ -# Non-Interactive Mode for ThrillWiki Automation - -The ThrillWiki automation script supports a non-interactive mode (`-y` flag) that allows you to run the entire setup process without any user prompts. This is perfect for: - -- **CI/CD pipelines** -- **Automated deployments** -- **Scripted environments** -- **Remote execution** - -## Prerequisites - -1. **Saved Configuration**: You must have run the script interactively at least once to create the saved configuration file (`.thrillwiki-config`). - -2. **Environment Variables**: Set the required environment variables for sensitive credentials that aren't saved to disk. - -## Required Environment Variables - -### Always Required -- `UNRAID_PASSWORD` - Your Unraid server password - -### Required if GitHub API is enabled -- `GITHUB_TOKEN` - Your GitHub personal access token (if using token auth method) - -### Required if Webhooks are enabled -- `WEBHOOK_SECRET` - Your GitHub webhook secret - -## Usage Examples - -### Basic Non-Interactive Setup -```bash -# Set required credentials -export UNRAID_PASSWORD="your_unraid_password" -export GITHUB_TOKEN="your_github_token" -export WEBHOOK_SECRET="your_webhook_secret" - -# Run in non-interactive mode -./setup-complete-automation.sh -y -``` - -### CI/CD Pipeline Example -```bash -#!/bin/bash -set -e - -# Load credentials from secure environment -export UNRAID_PASSWORD="$UNRAID_CREDS_PASSWORD" -export GITHUB_TOKEN="$GITHUB_API_TOKEN" -export WEBHOOK_SECRET="$WEBHOOK_SECRET_KEY" - -# Deploy with no user interaction -cd scripts/unraid -./setup-complete-automation.sh -y -``` - -### Docker/Container Example -```bash -# Run from container with environment file -docker run --env-file ***REMOVED***.secrets \ - -v $(pwd):/workspace \ - your-automation-container \ - /workspace/scripts/unraid/setup-complete-automation.sh -y -``` - -## Error Handling - -The script will exit with clear error messages if: - -- No saved configuration is found -- Required environment variables are missing -- OAuth tokens have expired (non-interactive mode cannot refresh them) - -### Common Issues - -**❌ No saved configuration** -``` -[ERROR] No saved configuration found. Cannot run in non-interactive mode. -[ERROR] Please run the script without -y flag first to create initial configuration. -``` -**Solution**: Run `./setup-complete-automation.sh` interactively first. - -**❌ Missing password** -``` -[ERROR] UNRAID_PASSWORD environment variable not set. -[ERROR] For non-interactive mode, set: export UNRAID_PASSWORD='your_password' -``` -**Solution**: Set the `UNRAID_PASSWORD` environment variable. - -**❌ Expired OAuth token** -``` -[ERROR] OAuth token expired and cannot refresh in non-interactive mode -[ERROR] Please run without -y flag to re-authenticate with GitHub -``` -**Solution**: Run interactively to refresh OAuth token, or switch to personal access token method. - -## Security Best Practices - -1. **Never commit credentials to version control** -2. **Use secure environment variable storage** (CI/CD secret stores, etc.) -3. **Rotate credentials regularly** -4. **Use minimal required permissions** for tokens -5. **Clear environment variables** after use if needed: - ```bash - unset UNRAID_PASSWORD GITHUB_TOKEN WEBHOOK_SECRET - ``` - -## Advanced Usage - -### Combining with Reset Modes -```bash -# Reset VM only and redeploy non-interactively -export UNRAID_PASSWORD="password" -./setup-complete-automation.sh --reset-vm -y -``` - -### Using with Different Authentication Methods -```bash -# For OAuth method (no GITHUB_TOKEN needed if valid) -export UNRAID_PASSWORD="password" -export WEBHOOK_SECRET="secret" -./setup-complete-automation.sh -y - -# For personal access token method -export UNRAID_PASSWORD="password" -export GITHUB_TOKEN="ghp_xxxx" -export WEBHOOK_SECRET="secret" -./setup-complete-automation.sh -y -``` - -### Environment File Pattern -```bash -# Create ***REMOVED***.automation (don't commit this!) -cat > ***REMOVED***.automation << EOF -UNRAID_PASSWORD=your_password_here -GITHUB_TOKEN=your_token_here -WEBHOOK_SECRET=your_secret_here -EOF - -# Use it -source ***REMOVED***.automation -./setup-complete-automation.sh -y - -# Clean up -rm ***REMOVED***.automation -``` - -## Integration Examples - -See `example-non-interactive.sh` for a complete working example that you can customize for your needs. - -The non-interactive mode makes it easy to integrate ThrillWiki deployment into your existing automation workflows while maintaining security and reliability. diff --git a/shared/scripts/unraid/README-template-deployment.md b/shared/scripts/unraid/README-template-deployment.md deleted file mode 100644 index 9b32e500..00000000 --- a/shared/scripts/unraid/README-template-deployment.md +++ /dev/null @@ -1,385 +0,0 @@ -# ThrillWiki Template-Based VM Deployment - -This guide explains how to use the new **template-based VM deployment** system that dramatically speeds up VM creation by using a pre-configured Ubuntu template instead of autoinstall ISOs. - -## Overview - -### Traditional Approach (Slow) -- Create autoinstall ISO from scratch -- Boot VM from ISO (20-30 minutes) -- Wait for Ubuntu installation -- Configure system packages and dependencies - -### Template Approach (Fast ⚡) -- Copy pre-configured VM disk from template -- Boot VM from template disk (2-5 minutes) -- System is already configured with Ubuntu, packages, and dependencies - -## Prerequisites - -1. **Template VM**: You must have a VM named `thrillwiki-template-ubuntu` on your Unraid server -2. **Template Configuration**: The template should be pre-configured with: - - Ubuntu 24.04 LTS - - Python 3, Git, PostgreSQL, Nginx - - UV package manager (optional but recommended) - - Basic system configuration - -## Template VM Setup - -### Creating the Template VM - -1. **Create the template VM manually** on your Unraid server: - - Name: `thrillwiki-template-ubuntu` - - Install Ubuntu 24.04 LTS - - Configure with 4GB RAM, 2 vCPUs (can be adjusted later) - -2. **Configure the template** by SSH'ing into it and running: - ```bash - # Update system - sudo apt update && sudo apt upgrade -y - - # Install required packages - sudo apt install -y git curl build-essential python3-pip python3-venv - sudo apt install -y postgresql postgresql-contrib nginx - - # Install UV (Python package manager) - curl -LsSf https://astral.sh/uv/install.sh | sh - source ~/.cargo/env - - # Create thrillwiki user with password 'thrillwiki' - sudo useradd -m -s /bin/bash thrillwiki || true - echo 'thrillwiki:thrillwiki' | sudo chpasswd - sudo usermod -aG sudo thrillwiki - - # Setup SSH key for thrillwiki user - # First, generate your SSH key on your Mac: - # ssh-keygen -t rsa -b 4096 -f ~/.ssh/thrillwiki_vm -N "" -C "thrillwiki-template-vm-access" - # Then copy the public key to the template VM: - sudo mkdir -p /home/thrillwiki/.ssh - echo "YOUR_PUBLIC_KEY_FROM_~/.ssh/thrillwiki_vm.pub" | sudo tee /home/thrillwiki/.ssh/***REMOVED*** - sudo chown -R thrillwiki:thrillwiki /home/thrillwiki/.ssh - sudo chmod 700 /home/thrillwiki/.ssh - sudo chmod 600 /home/thrillwiki/.ssh/***REMOVED*** - - # Configure PostgreSQL - sudo systemctl enable postgresql - sudo systemctl start postgresql - - # Configure Nginx - sudo systemctl enable nginx - - # Clean up for template - sudo apt autoremove -y - sudo apt autoclean - history -c && history -w - - # Shutdown template - sudo shutdown now - ``` - -3. **Verify template** is stopped and ready: - ```bash - ./template-utils.sh status # Should show "shut off" - ``` - -## Quick Start - -### Step 0: Set Up SSH Key (First Time Only) - -**IMPORTANT**: Before using template deployment, set up your SSH key: - -```bash -# Generate and configure SSH key -./scripts/unraid/setup-ssh-key.sh - -# Follow the instructions to add the public key to your template VM -``` - -See `TEMPLATE_VM_SETUP.md` for complete template VM setup instructions. - -### Using the Utility Script - -The easiest way to work with template VMs is using the utility script: - -```bash -# Check if template is ready -./template-utils.sh check - -# Get template information -./template-utils.sh info - -# Deploy a new VM from template -./template-utils.sh deploy my-thrillwiki-vm - -# Copy template to new VM (without full deployment) -./template-utils.sh copy my-vm-name - -# List all template-based VMs -./template-utils.sh list -``` - -### Using Python Scripts Directly - -For more control, use the Python scripts: - -```bash -# Set environment variables -export UNRAID_HOST="your.unraid.server.ip" -export UNRAID_USER="root" -export VM_NAME="my-thrillwiki-vm" -export REPO_URL="owner/repository-name" - -# Deploy VM from template -python3 main_template.py deploy - -# Just create VM without ThrillWiki setup -python3 main_template.py setup - -# Get VM status and IP -python3 main_template.py status -python3 main_template.py ip - -# Manage template -python3 main_template.py template info -python3 main_template.py template check -``` - -## File Structure - -### New Template-Based Files - -``` -scripts/unraid/ -├── template_manager.py # Template VM management -├── vm_manager_template.py # Template-based VM manager -├── main_template.py # Template deployment orchestrator -├── template-utils.sh # Quick utility commands -├── deploy-thrillwiki-template.sh # Optimized deployment script -├── thrillwiki-vm-template-simple.xml # VM XML without autoinstall ISO -└── README-template-deployment.md # This documentation -``` - -### Original Files (Still Available) - -``` -scripts/unraid/ -├── main.py # Original autoinstall approach -├── vm_manager.py # Original VM manager -├── deploy-thrillwiki.sh # Original deployment script -└── thrillwiki-vm-template.xml # Original XML with autoinstall -``` - -## Commands Reference - -### Template Management - -```bash -# Check template status -./template-utils.sh status -python3 template_manager.py check - -# Get template information -./template-utils.sh info -python3 template_manager.py info - -# List VMs created from template -./template-utils.sh list -python3 template_manager.py list - -# Update template instructions -./template-utils.sh update -python3 template_manager.py update -``` - -### VM Deployment - -```bash -# Complete deployment (VM + ThrillWiki) -./template-utils.sh deploy VM_NAME -python3 main_template.py deploy - -# VM setup only -python3 main_template.py setup - -# Individual operations -python3 main_template.py create -python3 main_template.py start -python3 main_template.py stop -python3 main_template.py delete -``` - -### VM Information - -```bash -# Get VM status -python3 main_template.py status - -# Get VM IP and connection info -python3 main_template.py ip - -# Get detailed VM information -python3 main_template.py info -``` - -## Environment Variables - -Configure these in your `***REMOVED***.unraid` file or export them: - -```bash -# Required -UNRAID_HOST="192.168.1.100" # Your Unraid server IP -UNRAID_USER="root" # Unraid SSH user -VM_NAME="thrillwiki-vm" # Name for new VM - -# Optional VM Configuration -VM_MEMORY="4096" # Memory in MB -VM_VCPUS="2" # Number of vCPUs -VM_DISK_SIZE="50" # Disk size in GB (for reference) -VM_IP="dhcp" # IP configuration (dhcp or static IP) - -# ThrillWiki Configuration -REPO_URL="owner/repository-name" # GitHub repository -GITHUB_TOKEN="ghp_xxxxx" # GitHub token (optional) -``` - -## Advantages of Template Approach - -### Speed ⚡ -- **VM Creation**: 2-5 minutes vs 20-30 minutes -- **Boot Time**: Instant boot vs full Ubuntu installation -- **Total Deployment**: ~10 minutes vs ~45 minutes - -### Reliability 🔒 -- **Pre-tested**: Template is already configured and tested -- **Consistent**: All VMs start from identical base -- **No Installation Failures**: No autoinstall ISO issues - -### Efficiency 💾 -- **Disk Space**: Copy-on-write QCOW2 format -- **Network**: No ISO downloads during deployment -- **Resources**: Less CPU usage during creation - -## Troubleshooting - -### Template Not Found -``` -❌ Template VM disk not found at: /mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2 -``` - -**Solution**: Create the template VM first or verify the path. - -### Template VM Running -``` -⚠️ Template VM is currently running! -``` - -**Solution**: Stop the template VM before creating new instances: -```bash -ssh root@unraid-host "virsh shutdown thrillwiki-template-ubuntu" -``` - -### SSH Connection Issues -``` -❌ Cannot connect to Unraid server -``` - -**Solutions**: -1. Verify `UNRAID_HOST` is correct -2. Ensure SSH key authentication is set up -3. Check network connectivity - -### Template Disk Corruption - -If template VM gets corrupted: -1. Start template VM and fix issues -2. Or recreate template VM from scratch -3. Update template: `./template-utils.sh update` - -## Template Maintenance - -### Updating the Template - -Periodically update your template: - -1. **Start template VM** on Unraid -2. **SSH into template** and update: - ```bash - sudo apt update && sudo apt upgrade -y - sudo apt autoremove -y && sudo apt autoclean - - # Update UV if installed - ~/.cargo/bin/uv --version - - # Clear history - history -c && history -w - ``` -3. **Shutdown template VM** -4. **Verify update**: `./template-utils.sh check` - -### Template Best Practices - -- Keep template VM stopped when not maintaining it -- Update template monthly or before major deployments -- Test template by creating a test VM before important deployments -- Document any custom configurations in the template - -## Migration Guide - -### From Autoinstall to Template - -1. **Create your template VM** following the setup guide above -2. **Test template deployment**: - ```bash - ./template-utils.sh deploy test-vm - ``` -3. **Update your automation scripts** to use template approach -4. **Keep autoinstall scripts** as backup for special cases - -### Switching Between Approaches - -You can use both approaches as needed: - -```bash -# Template-based (fast) -python3 main_template.py deploy - -# Autoinstall-based (traditional) -python3 main.py setup -``` - -## Integration with CI/CD - -The template approach integrates perfectly with your existing CI/CD: - -```bash -# In your automation scripts -export UNRAID_HOST="your-server" -export VM_NAME="thrillwiki-$(date +%s)" -export REPO_URL="your-org/thrillwiki" - -# Deploy quickly -./scripts/unraid/template-utils.sh deploy "$VM_NAME" - -# VM is ready in minutes instead of 30+ minutes -``` - -## FAQ - -**Q: Can I use both template and autoinstall approaches?** -A: Yes! Keep both. Use template for speed, autoinstall for special configurations. - -**Q: How much disk space does template copying use?** -A: QCOW2 copy-on-write format means copies only store differences, saving space. - -**Q: What if I need different Ubuntu versions?** -A: Create multiple template VMs (e.g., `thrillwiki-template-ubuntu-22`, `thrillwiki-template-ubuntu-24`). - -**Q: Can I customize the template VM configuration?** -A: Yes! The template VM is just a regular VM. Customize it as needed. - -**Q: Is this approach secure?** -A: Yes. Each VM gets a fresh copy and can be configured independently. - ---- - -This template-based approach should make your VM deployments much faster and more reliable! 🚀 diff --git a/shared/scripts/unraid/README.md b/shared/scripts/unraid/README.md deleted file mode 100644 index b2b8cf17..00000000 --- a/shared/scripts/unraid/README.md +++ /dev/null @@ -1,131 +0,0 @@ -# ThrillWiki Unraid VM Automation - -This directory contains scripts and configuration files for automating the creation and deployment of ThrillWiki VMs on Unraid servers using Ubuntu autoinstall. - -## Files - -- **`vm-manager.py`** - Main VM management script with direct kernel boot support -- **`thrillwiki-vm-template.xml`** - VM XML configuration template for libvirt -- **`cloud-init-template.yaml`** - Ubuntu autoinstall configuration template -- **`validate-autoinstall.py`** - Validation script for autoinstall configuration - -## Key Features - -### Direct Kernel Boot Approach -The system now uses direct kernel boot instead of GRUB-based boot for maximum reliability: - -1. **Kernel Extraction**: Automatically extracts Ubuntu kernel and initrd files from the ISO -2. **Direct Boot**: VM boots directly using extracted kernel with explicit autoinstall parameters -3. **Reliable Autoinstall**: Kernel cmdline explicitly specifies `autoinstall ds=nocloud-net;s=cdrom:/` - -### Schema-Compliant Configuration -The autoinstall configuration has been validated against Ubuntu's official schema: - -- ✅ Proper network configuration structure -- ✅ Correct storage layout specification -- ✅ Valid shutdown configuration -- ✅ Schema-compliant field types and values - -## Usage - -### Environment Variables -Set these environment variables before running: - -```bash -export UNRAID_HOST="your-unraid-server" -export UNRAID_USER="root" -export UNRAID_PASSWORD="your-password" -export SSH_PUBLIC_KEY="your-ssh-public-key" -export REPO_URL="https://github.com/your-username/thrillwiki.git" -export VM_IP="192.168.20.20" # or "dhcp" for DHCP -export VM_GATEWAY="192.168.20.1" -``` - -### Basic Operations - -```bash -# Create and configure VM -./vm-manager.py create - -# Start the VM -./vm-manager.py start - -# Check VM status -./vm-manager.py status - -# Get VM IP address -./vm-manager.py ip - -# Complete setup (create + start + get IP) -./vm-manager.py setup - -# Stop the VM -./vm-manager.py stop - -# Delete VM and all files -./vm-manager.py delete -``` - -### Configuration Validation - -```bash -# Validate autoinstall configuration -./validate-autoinstall.py -``` - -## How It Works - -### VM Creation Process - -1. **Extract Kernel**: Mount Ubuntu ISO and extract `vmlinuz` and `initrd` from `/casper/` -2. **Create Cloud-Init ISO**: Generate configuration ISO with autoinstall settings -3. **Generate VM XML**: Create libvirt VM configuration with direct kernel boot -4. **Define VM**: Register VM as persistent domain in libvirt - -### Boot Process - -1. **Direct Kernel Boot**: VM starts using extracted kernel and initrd directly -2. **Autoinstall Trigger**: Kernel cmdline forces Ubuntu installer into autoinstall mode -3. **Cloud-Init Data**: NoCloud datasource provides configuration from CD-ROM -4. **Automated Setup**: Ubuntu installs and configures ThrillWiki automatically - -### Network Configuration - -The system supports both static IP and DHCP configurations: - -- **Static IP**: Set `VM_IP` to desired IP address (e.g., "192.168.20.20") -- **DHCP**: Set `VM_IP` to "dhcp" for automatic IP assignment - -## Troubleshooting - -### VM Console Access -Connect to VM console to monitor autoinstall progress: -```bash -ssh root@unraid-server -virsh console thrillwiki-vm -``` - -### Check VM Logs -View autoinstall logs inside the VM: -```bash -# After VM is accessible -ssh ubuntu@vm-ip -sudo journalctl -u cloud-init -tail -f /var/log/cloud-init.log -``` - -### Validation Errors -If autoinstall validation fails, check: -1. YAML syntax in `cloud-init-template.yaml` -2. Required fields according to Ubuntu schema -3. Proper data types for configuration values - -## Architecture Benefits - -1. **Reliable Boot**: Direct kernel boot eliminates GRUB-related issues -2. **Schema Compliance**: Configuration validated against official Ubuntu schema -3. **Predictable Behavior**: Explicit kernel parameters ensure consistent autoinstall -4. **Clean Separation**: VM configuration, cloud-init, and kernel files are properly organized -5. **Easy Maintenance**: Modular design allows independent updates of components - -This implementation provides a robust, schema-compliant solution for automated ThrillWiki deployment on Unraid VMs. diff --git a/shared/scripts/unraid/TEMPLATE_VM_SETUP.md b/shared/scripts/unraid/TEMPLATE_VM_SETUP.md deleted file mode 100644 index 941b957c..00000000 --- a/shared/scripts/unraid/TEMPLATE_VM_SETUP.md +++ /dev/null @@ -1,245 +0,0 @@ -# Template VM Setup Instructions - -## Prerequisites for Template-Based Deployment - -Before using the template-based deployment system, you need to: - -1. **Create the template VM** named `thrillwiki-template-ubuntu` on your Unraid server -2. **Configure SSH access** with your public key -3. **Set up the template** with all required software - -## Step 1: Create Template VM on Unraid - -1. Create a new VM on your Unraid server: - - **Name**: `thrillwiki-template-ubuntu` - - **OS**: Ubuntu 24.04 LTS - - **Memory**: 4GB (you can adjust this later for instances) - - **vCPUs**: 2 (you can adjust this later for instances) - - **Disk**: 50GB (sufficient for template) - -2. Install Ubuntu 24.04 LTS using standard installation - -## Step 2: Configure Template VM - -SSH into your template VM and run the following setup: - -### Create thrillwiki User -```bash -# Create the thrillwiki user with password 'thrillwiki' -sudo useradd -m -s /bin/bash thrillwiki -echo 'thrillwiki:thrillwiki' | sudo chpasswd -sudo usermod -aG sudo thrillwiki - -# Switch to thrillwiki user for remaining setup -sudo su - thrillwiki -``` - -### Set Up SSH Access -**IMPORTANT**: Add your SSH public key to the template VM: - -```bash -# Create .ssh directory -mkdir -p ~/.ssh -chmod 700 ~/.ssh - -# Add your public key (replace with your actual public key) -echo "YOUR_PUBLIC_KEY_HERE" >> ~/.ssh/***REMOVED*** -chmod 600 ~/.ssh/***REMOVED*** -``` - -**To get your public key** (run this on your Mac): -```bash -# Generate key if it doesn't exist -if [ ! -f ~/.ssh/thrillwiki_vm ]; then - ssh-keygen -t rsa -b 4096 -f ~/.ssh/thrillwiki_vm -N "" -C "thrillwiki-template-vm-access" -fi - -# Show your public key to copy -cat ~/.ssh/thrillwiki_vm.pub -``` - -Copy this public key and paste it into the template VM's ***REMOVED*** file. - -### Install Required Software -```bash -# Update system -sudo apt update && sudo apt upgrade -y - -# Install essential packages -sudo apt install -y \ - git curl wget build-essential \ - python3 python3-pip python3-venv python3-dev \ - postgresql postgresql-contrib postgresql-client \ - nginx \ - htop tree vim nano \ - software-properties-common - -# Install UV (Python package manager) -curl -LsSf https://astral.sh/uv/install.sh | sh -source ~/.cargo/env - -# Add UV to PATH permanently -echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc - -# Configure PostgreSQL -sudo systemctl enable postgresql -sudo systemctl start postgresql - -# Create database user and database -sudo -u postgres createuser thrillwiki -sudo -u postgres createdb thrillwiki -sudo -u postgres psql -c "ALTER USER thrillwiki WITH PASSWORD 'thrillwiki';" -sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki TO thrillwiki;" - -# Configure Nginx -sudo systemctl enable nginx - -# Create ThrillWiki directories -mkdir -p ~/thrillwiki ~/logs ~/backups - -# Set up basic environment -echo "export DJANGO_SETTINGS_MODULE=thrillwiki.settings" >> ~/.bashrc -echo "export DATABASE_URL=[DATABASE-URL-REMOVED] >> ~/.bashrc -``` - -### Pre-install Common Python Packages (Optional) -```bash -# Create a base virtual environment with common packages -cd ~ -python3 -m venv base_venv -source base_venv/bin/activate -pip install --upgrade pip - -# Install common Django packages -pip install \ - django \ - psycopg2-binary \ - gunicorn \ - whitenoise \ - python-decouple \ - pillow \ - requests - -deactivate -``` - -### Clean Up Template -```bash -# Clean package cache -sudo apt autoremove -y -sudo apt autoclean - -# Clear bash history -history -c -history -w - -# Clear any temporary files -sudo find /tmp -type f -delete -sudo find /var/tmp -type f -delete - -# Shutdown the template VM -sudo shutdown now -``` - -## Step 3: Verify Template Setup - -After the template VM shuts down, verify it's ready: - -```bash -# From your Mac, check the template -cd /path/to/your/thrillwiki/project -./scripts/unraid/template-utils.sh check -``` - -## Step 4: Test Template Deployment - -Create a test VM from the template: - -```bash -# Deploy a test VM -./scripts/unraid/template-utils.sh deploy test-thrillwiki-vm - -# Check if it worked -ssh thrillwiki@ "echo 'Template VM working!'" -``` - -## Template VM Configuration Summary - -Your template VM should now have: - -- ✅ **Username**: `thrillwiki` (password: `thrillwiki`) -- ✅ **SSH Access**: Your public key in `/home/thrillwiki/.ssh/***REMOVED***` -- ✅ **Python**: Python 3 with UV package manager -- ✅ **Database**: PostgreSQL with `thrillwiki` user and database -- ✅ **Web Server**: Nginx installed and enabled -- ✅ **Directories**: `~/thrillwiki`, `~/logs`, `~/backups` ready - -## SSH Configuration on Your Mac - -The automation scripts will set this up, but you can also configure manually: - -```bash -# Add to ~/.ssh/config -cat >> ~/.ssh/config << EOF - -# ThrillWiki Template VM -Host thrillwiki-vm - HostName %h - User thrillwiki - IdentityFile ~/.ssh/thrillwiki_vm - StrictHostKeyChecking no - UserKnownHostsFile /dev/null -EOF -``` - -## Next Steps - -Once your template is set up: - -1. **Run the automation setup**: - ```bash - ./scripts/unraid/setup-template-automation.sh - ``` - -2. **Deploy VMs quickly**: - ```bash - ./scripts/unraid/template-utils.sh deploy my-vm-name - ``` - -3. **Enjoy 5-10x faster deployments** (2-5 minutes instead of 20-30 minutes!) - -## Troubleshooting - -### SSH Access Issues -```bash -# Test SSH access to template (when it's running for updates) -ssh -i ~/.ssh/thrillwiki_vm thrillwiki@TEMPLATE_VM_IP - -# If access fails, check: -# 1. Template VM is running -# 2. Public key is in ***REMOVED*** -# 3. Permissions are correct (700 for .ssh, 600 for ***REMOVED***) -``` - -### Template VM Updates -```bash -# Start template VM on Unraid -# SSH in and update: -sudo apt update && sudo apt upgrade -y -~/.cargo/bin/uv --version # Check UV is still working - -# Clean up and shutdown -sudo apt autoremove -y && sudo apt autoclean -history -c && history -w -sudo shutdown now -``` - -### Permission Issues -```bash -# If you get permission errors, ensure thrillwiki user owns everything -sudo chown -R thrillwiki:thrillwiki /home/thrillwiki/ -sudo chmod 700 /home/thrillwiki/.ssh -sudo chmod 600 /home/thrillwiki/.ssh/***REMOVED*** -``` - -Your template is now ready for lightning-fast VM deployments! ⚡ diff --git a/shared/scripts/unraid/autoinstall-user-data.yaml b/shared/scripts/unraid/autoinstall-user-data.yaml deleted file mode 100644 index 60ff8671..00000000 --- a/shared/scripts/unraid/autoinstall-user-data.yaml +++ /dev/null @@ -1,206 +0,0 @@ -#cloud-config -autoinstall: - # version is an Autoinstall required field. - version: 1 - - # Install Ubuntu server packages and ThrillWiki dependencies - packages: - - ubuntu-server - - curl - - wget - - git - - python3 - - python3-pip - - python3-venv - - nginx - - postgresql - - postgresql-contrib - - redis-server - - nodejs - - npm - - build-essential - - ufw - - fail2ban - - htop - - tree - - vim - - tmux - - qemu-guest-agent - - # User creation - identity: - realname: 'ThrillWiki Admin' - username: thrillwiki - # Default [PASSWORD-REMOVED] (change after login) - password: '$6$rounds=4096$saltsalt$[AWS-SECRET-REMOVED]AzpI8g8T14F8VnhXo0sUkZV2NV6/.c77tHgVi34DgbPu.' - hostname: thrillwiki-vm - - locale: en_US.UTF-8 - keyboard: - layout: us - - package_update: true - package_upgrade: true - - # Use direct storage layout (no LVM) - storage: - swap: - size: 0 - layout: - name: direct - - # SSH configuration - ssh: - allow-pw: true - install-server: true - authorized-keys: - - {SSH_PUBLIC_KEY} - - # Network configuration - will be replaced with proper config - network: - version: 2 - ethernets: - enp1s0: - dhcp4: true - dhcp-identifier: mac - - # Commands to run after installation - late-commands: - # Update GRUB - - curtin in-target -- update-grub - - # Enable and start services - - curtin in-target -- systemctl enable qemu-guest-agent - - curtin in-target -- systemctl enable postgresql - - curtin in-target -- systemctl enable redis-server - - curtin in-target -- systemctl enable nginx - - # Configure PostgreSQL - - curtin in-target -- sudo -u postgres createuser -s thrillwiki - - curtin in-target -- sudo -u postgres createdb thrillwiki_db - - curtin in-target -- sudo -u postgres psql -c "ALTER USER thrillwiki PASSWORD 'thrillwiki123';" - - # Configure firewall - - curtin in-target -- ufw allow OpenSSH - - curtin in-target -- ufw allow 'Nginx Full' - - curtin in-target -- ufw --force enable - - # Clone ThrillWiki repository if provided - - curtin in-target -- bash -c 'if [ -n "{GITHUB_REPO}" ]; then cd /home/thrillwiki && git clone "{GITHUB_REPO}" thrillwiki-app && chown -R thrillwiki:thrillwiki thrillwiki-app; fi' - - # Create deployment script - - curtin in-target -- tee /home/thrillwiki/deploy-thrillwiki.sh << 'EOF' -#!/bin/bash -set -e - -echo "=== ThrillWiki Deployment Script ===" - -# Check if repo was cloned -if [ ! -d "/home/thrillwiki/thrillwiki-app" ]; then - echo "Repository not found. Please clone your ThrillWiki repository:" - echo "git clone YOUR_REPO_URL thrillwiki-app" - exit 1 -fi - -cd /home/thrillwiki/thrillwiki-app - -# Create virtual environment -python3 -m venv venv -source venv/bin/activate - -# Install Python dependencies -if [ -f "requirements.txt" ]; then - pip install -r requirements.txt -else - echo "Warning: requirements.txt not found" -fi - -# Install Django if not in requirements -pip install django psycopg2-binary redis celery gunicorn - -# Set up environment variables -cat > ***REMOVED*** << 'ENVEOF' -DEBUG=False -SECRET_KEY=your-secret-key-change-this -DATABASE_URL=[DATABASE-URL-REMOVED] -REDIS_URL=redis://localhost:6379/0 -ALLOWED_HOSTS=localhost,127.0.0.1,thrillwiki-vm -ENVEOF - -# Run Django setup commands -if [ -f "manage.py" ]; then - python manage.py collectstatic --noinput - python manage.py migrate - echo "from django.contrib.auth import get_user_model; User = get_user_model(); User.objects.create_superuser('admin', 'admin@thrillwiki.com', 'thrillwiki123') if not User.objects.filter(username='admin').exists() else None" | python manage.py shell -fi - -# Configure Nginx -sudo tee /etc/nginx/sites-available/thrillwiki << 'NGINXEOF' -server { - listen 80; - server_name _; - - location /static/ { - alias /home/thrillwiki/thrillwiki-app/staticfiles/; - } - - location /media/ { - alias /home/thrillwiki/thrillwiki-app/media/; - } - - location / { - proxy_pass http://127.0.0.1:8000; - proxy_set_header Host $host; - proxy_set_header X-Real-IP $remote_addr; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_set_header X-Forwarded-Proto $scheme; - } -} -NGINXEOF - -# Enable Nginx site -sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/ -sudo rm -f /etc/nginx/sites-enabled/default -sudo systemctl reload nginx - -# Create systemd service for Django -sudo tee /etc/systemd/system/thrillwiki.service << 'SERVICEEOF' -[Unit] -Description=ThrillWiki Django App -After=network.target - -[Service] -User=thrillwiki -Group=thrillwiki -[AWS-SECRET-REMOVED]wiki-app -[AWS-SECRET-REMOVED]wiki-app/venv/bin -ExecStart=/home/thrillwiki/thrillwiki-app/venv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 thrillwiki.wsgi:application -Restart=always - -[Install] -WantedBy=multi-user.target -SERVICEEOF - -# Enable and start ThrillWiki service -sudo systemctl daemon-reload -sudo systemctl enable thrillwiki -sudo systemctl start thrillwiki - -echo "=== ThrillWiki deployment complete! ===" -echo "Access your application at: http://$(hostname -I | awk '{print $1}')" -echo "Django Admin: http://$(hostname -I | awk '{print $1}')/admin" -echo "Default superuser: admin / thrillwiki123" -echo "" -echo "Important: Change default passwords!" -EOF - - # Make deployment script executable - - curtin in-target -- chmod +x /home/thrillwiki/deploy-thrillwiki.sh - - curtin in-target -- chown thrillwiki:thrillwiki /home/thrillwiki/deploy-thrillwiki.sh - - # Clean up - - curtin in-target -- apt-get autoremove -y - - curtin in-target -- apt-get autoclean - - # Reboot after installation - shutdown: reboot diff --git a/shared/scripts/unraid/cloud-init-template.yaml b/shared/scripts/unraid/cloud-init-template.yaml deleted file mode 100644 index 2ac6a66c..00000000 --- a/shared/scripts/unraid/cloud-init-template.yaml +++ /dev/null @@ -1,62 +0,0 @@ -#cloud-config -# Ubuntu autoinstall configuration -autoinstall: - version: 1 - locale: en_US.UTF-8 - keyboard: - layout: us - network: - version: 2 - ethernets: - ens3: - dhcp4: true - enp1s0: - dhcp4: true - eth0: - dhcp4: true - ssh: - install-server: true - authorized-keys: - - {SSH_PUBLIC_KEY} - allow-pw: false - storage: - layout: - name: lvm - identity: - hostname: thrillwiki-vm - username: ubuntu - password: "$6$rounds=4096$salt$hash" # disabled - ssh key only - packages: - - openssh-server - - curl - - git - - python3 - - python3-pip - - python3-venv - - build-essential - - postgresql - - postgresql-contrib - - nginx - - nodejs - - npm - - wget - - ca-certificates - - openssl - - dnsutils - - net-tools - early-commands: - - systemctl stop ssh - late-commands: - # Enable sudo for ubuntu user - - echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu - # Install uv Python package manager - - chroot /target su - ubuntu -c 'curl -LsSf https://astral.sh/uv/install.sh | sh || pip3 install uv' - # Add uv to PATH - - chroot /target su - ubuntu -c 'echo "export PATH=\$HOME/.cargo/bin:\$PATH" >> /home/ubuntu/.bashrc' - # Clone ThrillWiki repository - - chroot /target su - ubuntu -c 'cd /home/ubuntu && git clone {GITHUB_REPO} thrillwiki' - # Setup systemd service for ThrillWiki - - systemctl enable postgresql - - systemctl enable nginx - - shutdown: reboot diff --git a/shared/scripts/unraid/deploy-thrillwiki-template.sh b/shared/scripts/unraid/deploy-thrillwiki-template.sh deleted file mode 100644 index a16c4c55..00000000 --- a/shared/scripts/unraid/deploy-thrillwiki-template.sh +++ /dev/null @@ -1,451 +0,0 @@ -#!/bin/bash -# -# ThrillWiki Template-Based Deployment Script -# Optimized for VMs deployed from templates that already have basic setup -# - -# Function to log messages with timestamp -log() { - echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a /home/ubuntu/thrillwiki-deploy.log -} - -# Function to check if a command exists -command_exists() { - command -v "$1" >/dev/null 2>&1 -} - -# Function to wait for network connectivity -wait_for_network() { - log "Waiting for network connectivity..." - local max_attempts=20 # Reduced from 30 since template VMs boot faster - local attempt=1 - while [ $attempt -le $max_attempts ]; do - if curl -s --connect-timeout 5 https://github.com >/dev/null 2>&1; then - log "Network connectivity confirmed" - return 0 - fi - log "Network attempt $attempt/$max_attempts failed, retrying in 5 seconds..." - sleep 5 # Reduced from 10 since template VMs should have faster networking - attempt=$((attempt + 1)) - done - log "WARNING: Network connectivity check failed after $max_attempts attempts" - return 1 -} - -# Function to update system packages (lighter since template should be recent) -update_system() { - log "Updating system packages..." - - # Quick update - template should already have most packages - sudo apt update || log "WARNING: apt update failed" - - # Only upgrade security packages to save time - sudo apt list --upgradable 2>/dev/null | grep -q security && { - log "Installing security updates..." - sudo apt upgrade -y --with-new-pkgs -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" || log "WARNING: Security updates failed" - } || log "No security updates needed" -} - -# Function to setup Python environment with template optimizations -setup_python_env() { - log "Setting up Python environment..." - - # Check if uv is already available (should be in template) - export PATH="/home/ubuntu/.cargo/bin:$PATH" - - if command_exists uv; then - log "Using existing uv installation from template" - uv --version - else - log "Installing uv (not found in template)..." - if wait_for_network; then - curl -LsSf --connect-timeout 30 --retry 2 --retry-delay 5 https://astral.sh/uv/install.sh | sh - export PATH="/home/ubuntu/.cargo/bin:$PATH" - else - log "WARNING: Network not available, falling back to pip" - fi - fi - - # Setup virtual environment - if command_exists uv; then - log "Creating virtual environment with uv..." - if uv venv .venv && source .venv/bin/activate; then - if uv sync; then - log "Successfully set up environment with uv" - return 0 - else - log "uv sync failed, falling back to pip" - fi - else - log "uv venv failed, falling back to pip" - fi - fi - - # Fallback to pip with venv - log "Setting up environment with pip and venv" - if python3 -m venv .venv && source .venv/bin/activate; then - pip install --upgrade pip || log "WARNING: Failed to upgrade pip" - - # Try different dependency installation methods - if [ -f pyproject.toml ]; then - log "Installing dependencies from pyproject.toml" - if pip install -e . || pip install .; then - log "Successfully installed dependencies from pyproject.toml" - return 0 - else - log "Failed to install from pyproject.toml" - fi - fi - - if [ -f requirements.txt ]; then - log "Installing dependencies from requirements.txt" - if pip install -r requirements.txt; then - log "Successfully installed dependencies from requirements.txt" - return 0 - else - log "Failed to install from requirements.txt" - fi - fi - - # Last resort: install common Django packages - log "Installing basic Django packages as fallback" - pip install django psycopg2-binary gunicorn || log "WARNING: Failed to install basic packages" - else - log "ERROR: Failed to create virtual environment" - return 1 - fi -} - -# Function to setup database (should already exist in template) -setup_database() { - log "Setting up PostgreSQL database..." - - # Check if PostgreSQL is already running (should be in template) - if sudo systemctl is-active --quiet postgresql; then - log "PostgreSQL is already running" - else - log "Starting PostgreSQL service..." - sudo systemctl start postgresql || { - log "Failed to start PostgreSQL, trying alternative methods" - sudo service postgresql start || { - log "ERROR: Could not start PostgreSQL" - return 1 - } - } - fi - - # Check if database and user already exist (may be in template) - if sudo -u postgres psql -lqt | cut -d \| -f 1 | grep -qw thrillwiki_production; then - log "Database 'thrillwiki_production' already exists" - else - log "Creating database 'thrillwiki_production'..." - sudo -u postgres createdb thrillwiki_production || { - log "ERROR: Failed to create database" - return 1 - } - fi - - # Create/update database user - if sudo -u postgres psql -c "SELECT 1 FROM pg_user WHERE usename = 'ubuntu'" | grep -q 1; then - log "Database user 'ubuntu' already exists" - else - sudo -u postgres createuser ubuntu || log "WARNING: Failed to create user (may already exist)" - fi - - # Grant permissions - sudo -u postgres psql -c "ALTER USER ubuntu WITH SUPERUSER;" || { - log "WARNING: Failed to grant superuser privileges, trying alternative permissions" - sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_production TO ubuntu;" || log "WARNING: Failed to grant database privileges" - } - - log "Database setup completed" -} - -# Function to run Django commands with fallbacks -run_django_commands() { - log "Running Django management commands..." - - # Ensure we're in the virtual environment - if [ ! -d ".venv" ] || ! source .venv/bin/activate; then - log "WARNING: Virtual environment not found or failed to activate" - # Try to run without venv activation - fi - - # Function to run a Django command with fallbacks - run_django_cmd() { - local cmd="$1" - local description="$2" - - log "Running: $description" - - # Try uv run first - if command_exists uv && uv run manage.py $cmd; then - log "Successfully ran '$cmd' with uv" - return 0 - fi - - # Try python in venv - if python manage.py $cmd; then - log "Successfully ran '$cmd' with python" - return 0 - fi - - # Try python3 - if python3 manage.py $cmd; then - log "Successfully ran '$cmd' with python3" - return 0 - fi - - log "WARNING: Failed to run '$cmd'" - return 1 - } - - # Run migrations - run_django_cmd "migrate" "Database migrations" || log "WARNING: Database migration failed" - - # Collect static files - run_django_cmd "collectstatic --noinput" "Static files collection" || log "WARNING: Static files collection failed" - - # Build Tailwind CSS (if available) - if run_django_cmd "tailwind build" "Tailwind CSS build"; then - log "Tailwind CSS built successfully" - else - log "Tailwind CSS build not available or failed - this is optional" - fi -} - -# Function to setup systemd services (may already exist in template) -setup_services() { - log "Setting up systemd services..." - - # Check if systemd service files exist - if [ -f scripts/systemd/thrillwiki.service ]; then - log "Copying ThrillWiki systemd service..." - sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ || { - log "Failed to copy thrillwiki.service, creating basic service" - create_basic_service - } - else - log "Systemd service file not found, creating basic service" - create_basic_service - fi - - # Copy webhook service if available - if [ -f scripts/systemd/thrillwiki-webhook.service ]; then - sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/ || { - log "Failed to copy webhook service, skipping" - } - else - log "Webhook service file not found, skipping" - fi - - # Update service files with correct paths - if [ -f /etc/systemd/system/thrillwiki.service ]; then - sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki.service - sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki.service - fi - - if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then - sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki-webhook.service - sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki-webhook.service - fi - - # Reload systemd and start services - sudo systemctl daemon-reload - - # Enable and start main service - if sudo systemctl enable thrillwiki 2>/dev/null; then - log "ThrillWiki service enabled" - if sudo systemctl start thrillwiki; then - log "ThrillWiki service started successfully" - else - log "WARNING: Failed to start ThrillWiki service" - sudo systemctl status thrillwiki --no-pager || true - fi - else - log "WARNING: Failed to enable ThrillWiki service" - fi - - # Try to start webhook service if it exists - if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then - sudo systemctl enable thrillwiki-webhook 2>/dev/null && sudo systemctl start thrillwiki-webhook || { - log "WARNING: Failed to start webhook service" - } - fi -} - -# Function to create a basic systemd service if none exists -create_basic_service() { - log "Creating basic systemd service..." - - sudo tee /etc/systemd/system/thrillwiki.service > /dev/null << 'SERVICE_EOF' -[Unit] -Description=ThrillWiki Django Application -After=network.target postgresql.service -Wants=postgresql.service - -[Service] -Type=exec -User=ubuntu -Group=ubuntu -[AWS-SECRET-REMOVED] -[AWS-SECRET-REMOVED]/.venv/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin -ExecStart=/home/ubuntu/thrillwiki/.venv/bin/python manage.py runserver 0.0.0.0:8000 -Restart=always -RestartSec=3 - -[Install] -WantedBy=multi-user.target -SERVICE_EOF - - log "Basic systemd service created" -} - -# Function to setup web server (may already be configured in template) -setup_webserver() { - log "Setting up web server..." - - # Check if nginx is installed and running - if command_exists nginx; then - if ! sudo systemctl is-active --quiet nginx; then - log "Starting nginx..." - sudo systemctl start nginx || log "WARNING: Failed to start nginx" - fi - - # Create basic nginx config if none exists - if [ ! -f /etc/nginx/sites-available/thrillwiki ]; then - log "Creating nginx configuration..." - sudo tee /etc/nginx/sites-available/thrillwiki > /dev/null << 'NGINX_EOF' -server { - listen 80; - server_name _; - - location /static/ { - alias /home/ubuntu/thrillwiki/staticfiles/; - } - - location /media/ { - alias /home/ubuntu/thrillwiki/media/; - } - - location / { - proxy_pass http://127.0.0.1:8000; - proxy_set_header Host $host; - proxy_set_header X-Real-IP $remote_addr; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_set_header X-Forwarded-Proto $scheme; - } -} -NGINX_EOF - - # Enable the site - sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/ || log "WARNING: Failed to enable nginx site" - sudo nginx -t && sudo systemctl reload nginx || log "WARNING: nginx configuration test failed" - else - log "nginx configuration already exists" - fi - else - log "nginx not installed, ThrillWiki will run on port 8000 directly" - fi -} - -# Main deployment function -main() { - log "Starting ThrillWiki template-based deployment..." - - # Shorter wait time since template VMs boot faster - log "Waiting for system to be ready..." - sleep 10 - - # Wait for network - wait_for_network || log "WARNING: Network check failed, continuing anyway" - - # Clone or update repository - log "Setting up ThrillWiki repository..." - export GITHUB_TOKEN=$(cat /home/ubuntu/.github-token 2>/dev/null || echo "") - - # Get the GitHub repository from environment or parameter - GITHUB_REPO="${1:-}" - if [ -z "$GITHUB_REPO" ]; then - log "ERROR: GitHub repository not specified" - return 1 - fi - - if [ -d "/home/ubuntu/thrillwiki" ]; then - log "ThrillWiki directory already exists, updating..." - cd /home/ubuntu/thrillwiki - git pull || log "WARNING: Failed to update repository" - else - if [ -n "$GITHUB_TOKEN" ]; then - log "Cloning with GitHub token..." - git clone https://$GITHUB_TOKEN@github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || { - log "Failed to clone with token, trying without..." - git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || { - log "ERROR: Failed to clone repository" - return 1 - } - } - else - log "Cloning without GitHub token..." - git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || { - log "ERROR: Failed to clone repository" - return 1 - } - fi - cd /home/ubuntu/thrillwiki - fi - - # Update system (lighter for template VMs) - update_system - - # Setup Python environment - setup_python_env || { - log "ERROR: Failed to set up Python environment" - return 1 - } - - # Setup environment file - log "Setting up environment configuration..." - if [ -f ***REMOVED***.example ]; then - cp ***REMOVED***.example ***REMOVED*** || log "WARNING: Failed to copy ***REMOVED***.example" - fi - - # Update ***REMOVED*** with production settings - { - echo "DEBUG=False" - echo "DATABASE_URL=postgresql://ubuntu@localhost/thrillwiki_production" - echo "ALLOWED_HOSTS=*" - echo "STATIC_[AWS-SECRET-REMOVED]" - } >> ***REMOVED*** - - # Setup database - setup_database || { - log "ERROR: Database setup failed" - return 1 - } - - # Run Django commands - run_django_commands - - # Setup systemd services - setup_services - - # Setup web server - setup_webserver - - log "ThrillWiki template-based deployment completed!" - log "Application should be available at http://$(hostname -I | awk '{print $1}'):8000" - log "Logs are available at /home/ubuntu/thrillwiki-deploy.log" -} - -# Run main function and capture any errors -main "$@" 2>&1 | tee -a /home/ubuntu/thrillwiki-deploy.log -exit_code=${PIPESTATUS[0]} - -if [ $exit_code -eq 0 ]; then - log "Template-based deployment completed successfully!" -else - log "Template-based deployment completed with errors (exit code: $exit_code)" -fi - -exit $exit_code diff --git a/shared/scripts/unraid/deploy-thrillwiki.sh b/shared/scripts/unraid/deploy-thrillwiki.sh deleted file mode 100755 index 45a6d65c..00000000 --- a/shared/scripts/unraid/deploy-thrillwiki.sh +++ /dev/null @@ -1,467 +0,0 @@ -#!/bin/bash - -# Function to log messages with timestamp -log() { - echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a /home/ubuntu/thrillwiki-deploy.log -} - -# Function to check if a command exists -command_exists() { - command -v "$1" >/dev/null 2>&1 -} - -# Function to wait for network connectivity -wait_for_network() { - log "Waiting for network connectivity..." - local max_attempts=30 - local attempt=1 - while [ $attempt -le $max_attempts ]; do - if curl -s --connect-timeout 5 https://github.com >/dev/null 2>&1; then - log "Network connectivity confirmed" - return 0 - fi - log "Network attempt $attempt/$max_attempts failed, retrying in 10 seconds..." - sleep 10 - attempt=$((attempt + 1)) - done - log "WARNING: Network connectivity check failed after $max_attempts attempts" - return 1 -} - -# Function to install uv if not available -install_uv() { - log "Checking for uv installation..." - export PATH="/home/ubuntu/.cargo/bin:$PATH" - - if command_exists uv; then - log "uv is already available" - return 0 - fi - - log "Installing uv..." - - # Wait for network connectivity first - wait_for_network || { - log "Network not available, skipping uv installation" - return 1 - } - - # Try to install uv with multiple attempts - local max_attempts=3 - local attempt=1 - while [ $attempt -le $max_attempts ]; do - log "uv installation attempt $attempt/$max_attempts" - - if curl -LsSf --connect-timeout 30 --retry 2 --retry-delay 5 https://astral.sh/uv/install.sh | sh; then - # Reload PATH - export PATH="/home/ubuntu/.cargo/bin:$PATH" - if command_exists uv; then - log "uv installed successfully" - return 0 - else - log "uv installation completed but command not found, checking PATH..." - # Try to source the shell profile to get updated PATH - if [ -f /home/ubuntu/.bashrc ]; then - source /home/ubuntu/.bashrc 2>/dev/null || true - fi - if [ -f /home/ubuntu/.cargo/env ]; then - source /home/ubuntu/.cargo/env 2>/dev/null || true - fi - export PATH="/home/ubuntu/.cargo/bin:$PATH" - if command_exists uv; then - log "uv is now available after PATH update" - return 0 - fi - fi - fi - - log "uv installation attempt $attempt failed" - attempt=$((attempt + 1)) - [ $attempt -le $max_attempts ] && sleep 10 - done - - log "Failed to install uv after $max_attempts attempts, will use pip fallback" - return 1 -} - -# Function to setup Python environment with fallbacks -setup_python_env() { - log "Setting up Python environment..." - - # Try to install uv first if not available - install_uv - - export PATH="/home/ubuntu/.cargo/bin:$PATH" - - # Try uv first - if command_exists uv; then - log "Using uv for Python environment management" - if uv venv .venv && source .venv/bin/activate; then - if uv sync; then - log "Successfully set up environment with uv" - return 0 - else - log "uv sync failed, falling back to pip" - fi - else - log "uv venv failed, falling back to pip" - fi - else - log "uv not available, using pip" - fi - - # Fallback to pip with venv - log "Setting up environment with pip and venv" - if python3 -m venv .venv && source .venv/bin/activate; then - pip install --upgrade pip || log "WARNING: Failed to upgrade pip" - - # Try different dependency installation methods - if [ -f pyproject.toml ]; then - log "Installing dependencies from pyproject.toml" - if pip install -e . || pip install .; then - log "Successfully installed dependencies from pyproject.toml" - return 0 - else - log "Failed to install from pyproject.toml" - fi - fi - - if [ -f requirements.txt ]; then - log "Installing dependencies from requirements.txt" - if pip install -r requirements.txt; then - log "Successfully installed dependencies from requirements.txt" - return 0 - else - log "Failed to install from requirements.txt" - fi - fi - - # Last resort: install common Django packages - log "Installing basic Django packages as fallback" - pip install django psycopg2-binary gunicorn || log "WARNING: Failed to install basic packages" - else - log "ERROR: Failed to create virtual environment" - return 1 - fi -} - -# Function to setup database with fallbacks -setup_database() { - log "Setting up PostgreSQL database..." - - # Ensure PostgreSQL is running - if ! sudo systemctl is-active --quiet postgresql; then - log "Starting PostgreSQL service..." - sudo systemctl start postgresql || { - log "Failed to start PostgreSQL, trying alternative methods" - sudo service postgresql start || { - log "ERROR: Could not start PostgreSQL" - return 1 - } - } - fi - - # Create database user and database with error handling - if sudo -u postgres createuser ubuntu 2>/dev/null || sudo -u postgres psql -c "SELECT 1 FROM pg_user WHERE usename = 'ubuntu'" | grep -q 1; then - log "Database user 'ubuntu' created or already exists" - else - log "ERROR: Failed to create database user" - return 1 - fi - - if sudo -u postgres createdb thrillwiki_production 2>/dev/null || sudo -u postgres psql -lqt | cut -d \| -f 1 | grep -qw thrillwiki_production; then - log "Database 'thrillwiki_production' created or already exists" - else - log "ERROR: Failed to create database" - return 1 - fi - - # Grant permissions - sudo -u postgres psql -c "ALTER USER ubuntu WITH SUPERUSER;" || { - log "WARNING: Failed to grant superuser privileges, trying alternative permissions" - sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki_production TO ubuntu;" || log "WARNING: Failed to grant database privileges" - } - - log "Database setup completed" -} - -# Function to run Django commands with fallbacks -run_django_commands() { - log "Running Django management commands..." - - # Ensure we're in the virtual environment - if [ ! -d ".venv" ] || ! source .venv/bin/activate; then - log "WARNING: Virtual environment not found or failed to activate" - # Try to run without venv activation - fi - - # Function to run a Django command with fallbacks - run_django_cmd() { - local cmd="$1" - local description="$2" - - log "Running: $description" - - # Try uv run first - if command_exists uv && uv run manage.py $cmd; then - log "Successfully ran '$cmd' with uv" - return 0 - fi - - # Try python in venv - if python manage.py $cmd; then - log "Successfully ran '$cmd' with python" - return 0 - fi - - # Try python3 - if python3 manage.py $cmd; then - log "Successfully ran '$cmd' with python3" - return 0 - fi - - log "WARNING: Failed to run '$cmd'" - return 1 - } - - # Run migrations - run_django_cmd "migrate" "Database migrations" || log "WARNING: Database migration failed" - - # Collect static files - run_django_cmd "collectstatic --noinput" "Static files collection" || log "WARNING: Static files collection failed" - - # Build Tailwind CSS (if available) - if run_django_cmd "tailwind build" "Tailwind CSS build"; then - log "Tailwind CSS built successfully" - else - log "Tailwind CSS build not available or failed - this is optional" - fi -} - -# Function to setup systemd services with fallbacks -setup_services() { - log "Setting up systemd services..." - - # Check if systemd service files exist - if [ -f scripts/systemd/thrillwiki.service ]; then - sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ || { - log "Failed to copy thrillwiki.service, creating basic service" - create_basic_service - } - else - log "Systemd service file not found, creating basic service" - create_basic_service - fi - - if [ -f scripts/systemd/thrillwiki-webhook.service ]; then - sudo cp scripts/systemd/thrillwiki-webhook.service /etc/systemd/system/ || { - log "Failed to copy webhook service, skipping" - } - else - log "Webhook service file not found, skipping" - fi - - # Update service files with correct paths - if [ -f /etc/systemd/system/thrillwiki.service ]; then - sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki.service - sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki.service - fi - - if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then - sudo sed -i "s|/opt/thrillwiki|/home/ubuntu/thrillwiki|g" /etc/systemd/system/thrillwiki-webhook.service - sudo sed -i "s|User=thrillwiki|User=ubuntu|g" /etc/systemd/system/thrillwiki-webhook.service - fi - - # Reload systemd and start services - sudo systemctl daemon-reload - - if sudo systemctl enable thrillwiki 2>/dev/null; then - log "ThrillWiki service enabled" - if sudo systemctl start thrillwiki; then - log "ThrillWiki service started successfully" - else - log "WARNING: Failed to start ThrillWiki service" - sudo systemctl status thrillwiki --no-pager || true - fi - else - log "WARNING: Failed to enable ThrillWiki service" - fi - - # Try to start webhook service if it exists - if [ -f /etc/systemd/system/thrillwiki-webhook.service ]; then - sudo systemctl enable thrillwiki-webhook 2>/dev/null && sudo systemctl start thrillwiki-webhook || { - log "WARNING: Failed to start webhook service" - } - fi -} - -# Function to create a basic systemd service if none exists -create_basic_service() { - log "Creating basic systemd service..." - - sudo tee /etc/systemd/system/thrillwiki.service > /dev/null << 'SERVICE_EOF' -[Unit] -Description=ThrillWiki Django Application -After=network.target postgresql.service -Wants=postgresql.service - -[Service] -Type=exec -User=ubuntu -Group=ubuntu -[AWS-SECRET-REMOVED] -[AWS-SECRET-REMOVED]/.venv/bin:/home/ubuntu/.cargo/bin:/usr/local/bin:/usr/bin:/bin -ExecStart=/home/ubuntu/thrillwiki/.venv/bin/python manage.py runserver 0.0.0.0:8000 -Restart=always -RestartSec=3 - -[Install] -WantedBy=multi-user.target -SERVICE_EOF - - log "Basic systemd service created" -} - -# Function to setup web server (nginx) with fallbacks -setup_webserver() { - log "Setting up web server..." - - # Check if nginx is installed and running - if command_exists nginx; then - if ! sudo systemctl is-active --quiet nginx; then - log "Starting nginx..." - sudo systemctl start nginx || log "WARNING: Failed to start nginx" - fi - - # Create basic nginx config if none exists - if [ ! -f /etc/nginx/sites-available/thrillwiki ]; then - log "Creating nginx configuration..." - sudo tee /etc/nginx/sites-available/thrillwiki > /dev/null << 'NGINX_EOF' -server { - listen 80; - server_name _; - - location /static/ { - alias /home/ubuntu/thrillwiki/staticfiles/; - } - - location /media/ { - alias /home/ubuntu/thrillwiki/media/; - } - - location / { - proxy_pass http://127.0.0.1:8000; - proxy_set_header Host $host; - proxy_set_header X-Real-IP $remote_addr; - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_set_header X-Forwarded-Proto $scheme; - } -} -NGINX_EOF - - # Enable the site - sudo ln -sf /etc/nginx/sites-available/thrillwiki /etc/nginx/sites-enabled/ || log "WARNING: Failed to enable nginx site" - sudo nginx -t && sudo systemctl reload nginx || log "WARNING: nginx configuration test failed" - fi - else - log "nginx not installed, ThrillWiki will run on port 8000 directly" - fi -} - -# Main deployment function -main() { - log "Starting ThrillWiki deployment..." - - # Wait for system to be ready - log "Waiting for system to be ready..." - sleep 30 - - # Wait for network - wait_for_network || log "WARNING: Network check failed, continuing anyway" - - # Clone repository - log "Cloning ThrillWiki repository..." - export GITHUB_TOKEN=$(cat /home/ubuntu/.github-token 2>/dev/null || echo "") - - # Get the GitHub repository from environment or parameter - GITHUB_REPO="${1:-}" - if [ -z "$GITHUB_REPO" ]; then - log "ERROR: GitHub repository not specified" - return 1 - fi - - if [ -d "/home/ubuntu/thrillwiki" ]; then - log "ThrillWiki directory already exists, updating..." - cd /home/ubuntu/thrillwiki - git pull || log "WARNING: Failed to update repository" - else - if [ -n "$GITHUB_TOKEN" ]; then - log "Cloning with GitHub token..." - git clone https://$GITHUB_TOKEN@github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || { - log "Failed to clone with token, trying without..." - git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || { - log "ERROR: Failed to clone repository" - return 1 - } - } - else - log "Cloning without GitHub token..." - git clone https://github.com/$GITHUB_REPO /home/ubuntu/thrillwiki || { - log "ERROR: Failed to clone repository" - return 1 - } - fi - cd /home/ubuntu/thrillwiki - fi - - # Setup Python environment - setup_python_env || { - log "ERROR: Failed to set up Python environment" - return 1 - } - - # Setup environment file - log "Setting up environment configuration..." - if [ -f ***REMOVED***.example ]; then - cp ***REMOVED***.example ***REMOVED*** || log "WARNING: Failed to copy ***REMOVED***.example" - fi - - # Update ***REMOVED*** with production settings - { - echo "DEBUG=False" - echo "DATABASE_URL=postgresql://ubuntu@localhost/thrillwiki_production" - echo "ALLOWED_HOSTS=*" - echo "STATIC_[AWS-SECRET-REMOVED]" - } >> ***REMOVED*** - - # Setup database - setup_database || { - log "ERROR: Database setup failed" - return 1 - } - - # Run Django commands - run_django_commands - - # Setup systemd services - setup_services - - # Setup web server - setup_webserver - - log "ThrillWiki deployment completed!" - log "Application should be available at http://$(hostname -I | awk '{print $1}'):8000" - log "Logs are available at /home/ubuntu/thrillwiki-deploy.log" -} - -# Run main function and capture any errors -main "$@" 2>&1 | tee -a /home/ubuntu/thrillwiki-deploy.log -exit_code=${PIPESTATUS[0]} - -if [ $exit_code -eq 0 ]; then - log "Deployment completed successfully!" -else - log "Deployment completed with errors (exit code: $exit_code)" -fi - -exit $exit_code diff --git a/shared/scripts/unraid/example-non-interactive.sh b/shared/scripts/unraid/example-non-interactive.sh deleted file mode 100755 index e7c2c746..00000000 --- a/shared/scripts/unraid/example-non-interactive.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/bin/bash - -# Example: How to use non-interactive mode for ThrillWiki setup -# -# This script shows how to set up environment variables for non-interactive mode -# and run the automation without any user prompts. - -echo "🤖 ThrillWiki Non-Interactive Setup Example" -echo "[AWS-SECRET-REMOVED]==" - -# Set required environment variables for non-interactive mode -# These replace the interactive prompts - -# Unraid password (REQUIRED) -export UNRAID_PASSWORD="your_unraid_password_here" - -# GitHub token (REQUIRED if using GitHub API) -export GITHUB_TOKEN="your_github_token_here" - -# Webhook secret (REQUIRED if webhooks enabled) -export WEBHOOK_SECRET="your_webhook_secret_here" - -echo "✅ Environment variables set" -echo "📋 Configuration summary:" -echo " - UNRAID_PASSWORD: [HIDDEN]" -echo " - GITHUB_TOKEN: [HIDDEN]" -echo " - WEBHOOK_SECRET: [HIDDEN]" -echo - -echo "🚀 Starting non-interactive setup..." -echo "This will use saved configuration and the environment variables above" -echo - -# Run the setup script in non-interactive mode -./setup-complete-automation.sh -y - -echo -echo "✨ Non-interactive setup completed!" -echo "📝 Note: This example script should be customized with your actual credentials" diff --git a/shared/scripts/unraid/iso_builder.py b/shared/scripts/unraid/iso_builder.py deleted file mode 100644 index cbfcb548..00000000 --- a/shared/scripts/unraid/iso_builder.py +++ /dev/null @@ -1,531 +0,0 @@ -#!/usr/bin/env python3 -""" -Ubuntu ISO Builder for Autoinstall -Follows the Ubuntu autoinstall guide exactly: -1. Download Ubuntu ISO -2. Extract with 7zip equivalent -3. Modify GRUB configuration -4. Add server/ directory with autoinstall config -5. Rebuild ISO with xorriso equivalent -""" - -import os -import logging -import subprocess -import tempfile -import shutil -import urllib.request -from pathlib import Path -from typing import Optional - -logger = logging.getLogger(__name__) - -# Ubuntu ISO URLs with fallbacks -UBUNTU_MIRRORS = [ - "https://releases.ubuntu.com", # Official Ubuntu releases (primary) - "http://archive.ubuntu.com/ubuntu-releases", # Official archive - "http://mirror.csclub.uwaterloo.ca/ubuntu-releases", # University of Waterloo - "http://mirror.math.princeton.edu/pub/ubuntu-releases", # Princeton mirror -] -UBUNTU_24_04_ISO = "24.04/ubuntu-24.04.3-live-server-amd64.iso" -UBUNTU_22_04_ISO = "22.04/ubuntu-22.04.3-live-server-amd64.iso" - - -def get_latest_ubuntu_server_iso(version: str) -> Optional[str]: - """Dynamically find the latest point release for a given Ubuntu version.""" - try: - import re - - for mirror in UBUNTU_MIRRORS: - try: - url = f"{mirror}/{version}/" - response = urllib.request.urlopen(url, timeout=10) - content = response.read().decode("utf-8") - - # Find all server ISO files for this version - pattern = rf"ubuntu-{ - re.escape(version)}\.[0-9]+-live-server-amd64\.iso" - matches = re.findall(pattern, content) - - if matches: - # Sort by version and return the latest - matches.sort(key=lambda x: [int(n) for n in re.findall(r"\d+", x)]) - latest_iso = matches[-1] - return f"{version}/{latest_iso}" - except Exception as e: - logger.debug(f"Failed to check {mirror}/{version}/: {e}") - continue - - logger.warning(f"Could not dynamically detect latest ISO for Ubuntu {version}") - return None - - except Exception as e: - logger.error(f"Error in dynamic ISO detection: {e}") - return None - - -class UbuntuISOBuilder: - """Builds modified Ubuntu ISO with autoinstall configuration.""" - - def __init__(self, vm_name: str, work_dir: Optional[str] = None): - self.vm_name = vm_name - self.work_dir = ( - Path(work_dir) - if work_dir - else Path(tempfile.mkdtemp(prefix="ubuntu-autoinstall-")) - ) - self.source_files_dir = self.work_dir / "source-files" - self.boot_dir = self.work_dir / "BOOT" - self.server_dir = self.source_files_dir / "server" - self.grub_cfg_path = self.source_files_dir / "boot" / "grub" / "grub.cfg" - - # Ensure directories exist - self.work_dir.mkdir(exist_ok=True, parents=True) - self.source_files_dir.mkdir(exist_ok=True, parents=True) - - def check_tools(self) -> bool: - """Check if required tools are available.""" - - # Check for 7zip equivalent (p7zip on macOS/Linux) - if not shutil.which("7z") and not shutil.which("7za"): - logger.error( - "7zip not found. Install with: brew install p7zip (macOS) or apt install p7zip-full (Ubuntu)" - ) - return False - - # Check for xorriso equivalent - if ( - not shutil.which("xorriso") - and not shutil.which("mkisofs") - and not shutil.which("hdiutil") - ): - logger.error( - "No ISO creation tool found. Install xorriso, mkisofs, or use macOS hdiutil" - ) - return False - - return True - - def download_ubuntu_iso(self, version: str = "24.04") -> Path: - """Download Ubuntu ISO if not already present, trying multiple mirrors.""" - iso_filename = f"ubuntu-{version}-live-server-amd64.iso" - iso_path = self.work_dir / iso_filename - - if iso_path.exists(): - logger.info(f"Ubuntu ISO already exists: {iso_path}") - return iso_path - - if version == "24.04": - iso_subpath = UBUNTU_24_04_ISO - elif version == "22.04": - iso_subpath = UBUNTU_22_04_ISO - else: - raise ValueError(f"Unsupported Ubuntu version: {version}") - - # Try each mirror until one works - last_error = None - for mirror in UBUNTU_MIRRORS: - iso_url = f"{mirror}/{iso_subpath}" - logger.info(f"Trying to download Ubuntu {version} ISO from {iso_url}") - - try: - # Try downloading from this mirror - urllib.request.urlretrieve(iso_url, iso_path) - logger.info( - f"✅ Ubuntu ISO downloaded successfully from {mirror}: {iso_path}" - ) - return iso_path - except Exception as e: - last_error = e - logger.warning(f"Failed to download from {mirror}: {e}") - # Remove partial download if it exists - if iso_path.exists(): - iso_path.unlink() - continue - - # If we get here, all mirrors failed - logger.error( - f"Failed to download Ubuntu ISO from all mirrors. Last error: {last_error}" - ) - raise last_error - - def extract_iso(self, iso_path: Path) -> bool: - """Extract Ubuntu ISO following the guide.""" - logger.info(f"Extracting ISO: {iso_path}") - - # Use 7z to extract ISO - seven_zip_cmd = "7z" if shutil.which("7z") else "7za" - - try: - # Extract ISO: 7z -y x ubuntu.iso -osource-files - subprocess.run( - [ - seven_zip_cmd, - "-y", - "x", - str(iso_path), - f"-o{self.source_files_dir}", - ], - capture_output=True, - text=True, - check=True, - ) - - logger.info("ISO extracted successfully") - - # Move [BOOT] directory as per guide: mv '[BOOT]' ../BOOT - boot_source = self.source_files_dir / "[BOOT]" - if boot_source.exists(): - shutil.move(str(boot_source), str(self.boot_dir)) - logger.info(f"Moved [BOOT] directory to {self.boot_dir}") - else: - logger.warning("[BOOT] directory not found in extracted files") - - return True - - except subprocess.CalledProcessError as e: - logger.error(f"Failed to extract ISO: {e.stderr}") - return False - except Exception as e: - logger.error(f"Error extracting ISO: {e}") - return False - - def modify_grub_config(self) -> bool: - """Modify GRUB configuration to add autoinstall menu entry.""" - logger.info("Modifying GRUB configuration...") - - if not self.grub_cfg_path.exists(): - logger.error(f"GRUB config not found: {self.grub_cfg_path}") - return False - - try: - # Read existing GRUB config - with open(self.grub_cfg_path, "r", encoding="utf-8") as f: - grub_content = f.read() - - # Autoinstall menu entry as per guide - autoinstall_entry = """menuentry "Autoinstall Ubuntu Server" { - set gfxpayload=keep - linux /casper/vmlinuz quiet autoinstall ds=nocloud\\;s=/cdrom/server/ --- - initrd /casper/initrd -} - -""" - - # Insert autoinstall entry at the beginning of menu entries - # Find the first menuentry and insert before it - import re - - first_menu_match = re.search(r'(menuentry\s+["\'])', grub_content) - if first_menu_match: - insert_pos = first_menu_match.start() - modified_content = ( - grub_content[:insert_pos] - + autoinstall_entry - + grub_content[insert_pos:] - ) - else: - # Fallback: append at the end - modified_content = grub_content + "\n" + autoinstall_entry - - # Write modified GRUB config - with open(self.grub_cfg_path, "w", encoding="utf-8") as f: - f.write(modified_content) - - logger.info("GRUB configuration modified successfully") - return True - - except Exception as e: - logger.error(f"Failed to modify GRUB config: {e}") - return False - - def create_autoinstall_config(self, user_data: str) -> bool: - """Create autoinstall configuration in server/ directory.""" - logger.info("Creating autoinstall configuration...") - - try: - # Create server directory - self.server_dir.mkdir(exist_ok=True, parents=True) - - # Create empty meta-data file (as per guide) - meta_data_path = self.server_dir / "meta-data" - meta_data_path.touch() - logger.info(f"Created empty meta-data: {meta_data_path}") - - # Create user-data file with autoinstall configuration - user_data_path = self.server_dir / "user-data" - with open(user_data_path, "w", encoding="utf-8") as f: - f.write(user_data) - logger.info(f"Created user-data: {user_data_path}") - - return True - - except Exception as e: - logger.error(f"Failed to create autoinstall config: {e}") - return False - - def rebuild_iso(self, output_path: Path) -> bool: - """Rebuild ISO with autoinstall configuration using xorriso.""" - logger.info(f"Rebuilding ISO: {output_path}") - - try: - # Change to source-files directory for xorriso command - original_cwd = os.getcwd() - os.chdir(self.source_files_dir) - - # Remove existing output file - if output_path.exists(): - output_path.unlink() - - # Try different ISO creation methods in order of preference - success = False - - # Method 1: xorriso (most complete) - if shutil.which("xorriso") and not success: - try: - logger.info("Trying xorriso method...") - cmd = [ - "xorriso", - "-as", - "mkisofs", - "-r", - "-V", - f"Ubuntu 24.04 LTS AUTO (EFIBIOS)", - "-o", - str(output_path), - "--grub2-mbr", - f"..{os.sep}BOOT{os.sep}1-Boot-NoEmul.img", - "-partition_offset", - "16", - "--mbr-force-bootable", - "-append_partition", - "2", - "28732ac11ff8d211ba4b00a0c93ec93b", - f"..{os.sep}BOOT{os.sep}2-Boot-NoEmul.img", - "-appended_part_as_gpt", - "-iso_mbr_part_type", - "a2a0d0ebe5b9334487c068b6b72699c7", - "-c", - "/boot.catalog", - "-b", - "/boot/grub/i386-pc/eltorito.img", - "-no-emul-boot", - "-boot-load-size", - "4", - "-boot-info-table", - "--grub2-boot-info", - "-eltorito-alt-boot", - "-e", - "--interval:appended_partition_2:::", - "-no-emul-boot", - ".", - ] - subprocess.run(cmd, capture_output=True, text=True, check=True) - success = True - logger.info("✅ ISO created with xorriso") - except subprocess.CalledProcessError as e: - logger.warning(f"xorriso failed: {e.stderr}") - if output_path.exists(): - output_path.unlink() - - # Method 2: mkisofs with joliet-long - if shutil.which("mkisofs") and not success: - try: - logger.info("Trying mkisofs with joliet-long...") - cmd = [ - "mkisofs", - "-r", - "-V", - f"Ubuntu 24.04 LTS AUTO", - "-cache-inodes", - "-J", - "-joliet-long", - "-l", - "-b", - "boot/grub/i386-pc/eltorito.img", - "-c", - "boot.catalog", - "-no-emul-boot", - "-boot-load-size", - "4", - "-boot-info-table", - "-o", - str(output_path), - ".", - ] - subprocess.run(cmd, capture_output=True, text=True, check=True) - success = True - logger.info("✅ ISO created with mkisofs (joliet-long)") - except subprocess.CalledProcessError as e: - logger.warning(f"mkisofs with joliet-long failed: {e.stderr}") - if output_path.exists(): - output_path.unlink() - - # Method 3: mkisofs without Joliet (fallback) - if shutil.which("mkisofs") and not success: - try: - logger.info("Trying mkisofs without Joliet (fallback)...") - cmd = [ - "mkisofs", - "-r", - "-V", - f"Ubuntu 24.04 LTS AUTO", - "-cache-inodes", - "-l", # No -J (Joliet) to avoid filename conflicts - "-b", - "boot/grub/i386-pc/eltorito.img", - "-c", - "boot.catalog", - "-no-emul-boot", - "-boot-load-size", - "4", - "-boot-info-table", - "-o", - str(output_path), - ".", - ] - subprocess.run(cmd, capture_output=True, text=True, check=True) - success = True - logger.info("✅ ISO created with mkisofs (no Joliet)") - except subprocess.CalledProcessError as e: - logger.warning( - f"mkisofs without Joliet failed: { - e.stderr}" - ) - if output_path.exists(): - output_path.unlink() - - # Method 4: macOS hdiutil - if shutil.which("hdiutil") and not success: - try: - logger.info("Trying hdiutil (macOS)...") - cmd = [ - "hdiutil", - "makehybrid", - "-iso", - "-joliet", - "-o", - str(output_path), - ".", - ] - subprocess.run(cmd, capture_output=True, text=True, check=True) - success = True - logger.info("✅ ISO created with hdiutil") - except subprocess.CalledProcessError as e: - logger.warning(f"hdiutil failed: {e.stderr}") - if output_path.exists(): - output_path.unlink() - - if not success: - logger.error("All ISO creation methods failed") - return False - - # Verify the output file was created - if not output_path.exists(): - logger.error("ISO file was not created despite success message") - return False - - logger.info(f"ISO rebuilt successfully: {output_path}") - logger.info( - f"ISO size: {output_path.stat().st_size / (1024 * 1024):.1f} MB" - ) - return True - - except Exception as e: - logger.error(f"Error rebuilding ISO: {e}") - return False - finally: - # Return to original directory - os.chdir(original_cwd) - - def build_autoinstall_iso( - self, user_data: str, output_path: Path, ubuntu_version: str = "24.04" - ) -> bool: - """Complete ISO build process following the Ubuntu autoinstall guide.""" - logger.info( - f"🚀 Starting Ubuntu {ubuntu_version} autoinstall ISO build process" - ) - - try: - # Step 1: Check tools - if not self.check_tools(): - return False - - # Step 2: Download Ubuntu ISO - iso_path = self.download_ubuntu_iso(ubuntu_version) - - # Step 3: Extract ISO - if not self.extract_iso(iso_path): - return False - - # Step 4: Modify GRUB - if not self.modify_grub_config(): - return False - - # Step 5: Create autoinstall config - if not self.create_autoinstall_config(user_data): - return False - - # Step 6: Rebuild ISO - if not self.rebuild_iso(output_path): - return False - - logger.info(f"🎉 Successfully created autoinstall ISO: {output_path}") - logger.info(f"📁 Work directory: {self.work_dir}") - return True - - except Exception as e: - logger.error(f"Failed to build autoinstall ISO: {e}") - return False - - def cleanup(self): - """Clean up temporary work directory.""" - if self.work_dir.exists(): - shutil.rmtree(self.work_dir) - logger.info(f"Cleaned up work directory: {self.work_dir}") - - -def main(): - """Test the ISO builder.""" - import logging - - logging.basicConfig(level=logging.INFO) - - # Sample autoinstall user-data - user_data = """#cloud-config -autoinstall: - version: 1 - packages: - - ubuntu-server - identity: - realname: 'Test User' - username: testuser - password: '$6$rounds=4096$saltsalt$[AWS-SECRET-REMOVED]AzpI8g8T14F8VnhXo0sUkZV2NV6/.c77tHgVi34DgbPu.' - hostname: test-vm - locale: en_US.UTF-8 - keyboard: - layout: us - storage: - layout: - name: direct - ssh: - install-server: true - late-commands: - - curtin in-target -- apt-get autoremove -y -""" - - builder = UbuntuISOBuilder("test-vm") - output_path = Path("/tmp/ubuntu-24.04-autoinstall.iso") - - success = builder.build_autoinstall_iso(user_data, output_path) - if success: - print(f"✅ ISO created: {output_path}") - else: - print("❌ ISO creation failed") - - # Optionally clean up - # builder.cleanup() - - -if __name__ == "__main__": - main() diff --git a/shared/scripts/unraid/main.py b/shared/scripts/unraid/main.py deleted file mode 100644 index 80786d21..00000000 --- a/shared/scripts/unraid/main.py +++ /dev/null @@ -1,288 +0,0 @@ -#!/usr/bin/env python3 -""" -Unraid VM Manager for ThrillWiki - Main Orchestrator -Follows the Ubuntu autoinstall guide exactly: -1. Creates modified Ubuntu ISO with autoinstall configuration -2. Manages VM lifecycle on Unraid server -3. Handles ThrillWiki deployment automation -""" - -import os -import sys -import logging -from pathlib import Path - -# Import our modular components -from iso_builder import UbuntuISOBuilder -from vm_manager import UnraidVMManager - -# Configuration -UNRAID_HOST = os.environ.get("UNRAID_HOST", "localhost") -UNRAID_USER = os.environ.get("UNRAID_USER", "root") -VM_NAME = os.environ.get("VM_NAME", "thrillwiki-vm") -VM_MEMORY = int(os.environ.get("VM_MEMORY", 4096)) # MB -VM_VCPUS = int(os.environ.get("VM_VCPUS", 2)) -VM_DISK_SIZE = int(os.environ.get("VM_DISK_SIZE", 50)) # GB -SSH_PUBLIC_KEY = os.environ.get("SSH_PUBLIC_KEY", "") - -# Network Configuration -VM_IP = os.environ.get("VM_IP", "dhcp") -VM_GATEWAY = os.environ.get("VM_GATEWAY", "192.168.20.1") -VM_NETMASK = os.environ.get("VM_NETMASK", "255.255.255.0") -VM_NETWORK = os.environ.get("VM_NETWORK", "192.168.20.0/24") - -# GitHub Configuration -REPO_URL = os.environ.get("REPO_URL", "") -GITHUB_USERNAME = os.environ.get("GITHUB_USERNAME", "") -GITHUB_TOKEN = os.environ.get("GITHUB_TOKEN", "") - -# Ubuntu version preference -UBUNTU_VERSION = os.environ.get("UBUNTU_VERSION", "24.04") - -# Setup logging -os.makedirs("logs", exist_ok=True) -logging.basicConfig( - level=logging.INFO, - format="%(asctime)s - %(levelname)s - %(message)s", - handlers=[ - logging.FileHandler("logs/unraid-vm.log"), - logging.StreamHandler(), - ], -) -logger = logging.getLogger(__name__) - - -class ThrillWikiVMOrchestrator: - """Main orchestrator for ThrillWiki VM deployment.""" - - def __init__(self): - self.vm_manager = UnraidVMManager(VM_NAME, UNRAID_HOST, UNRAID_USER) - self.iso_builder = None - - def create_autoinstall_user_data(self) -> str: - """Create autoinstall user-data configuration.""" - # Read autoinstall template - template_path = Path(__file__).parent / "autoinstall-user-data.yaml" - if not template_path.exists(): - raise FileNotFoundError(f"Autoinstall template not found: {template_path}") - - with open(template_path, "r", encoding="utf-8") as f: - template = f.read() - - # Replace placeholders using string replacement (avoiding .format() due - # to curly braces in YAML) - user_data = template.replace( - "{SSH_PUBLIC_KEY}", - SSH_PUBLIC_KEY if SSH_PUBLIC_KEY else "# No SSH key provided", - ).replace("{GITHUB_REPO}", REPO_URL if REPO_URL else "") - - # Update network configuration based on VM_IP setting - if VM_IP.lower() == "dhcp": - # Keep DHCP configuration as-is - pass - else: - # Replace with static IP configuration - network_config = f"""dhcp4: false - addresses: - - {VM_IP}/24 - gateway4: {VM_GATEWAY} - nameservers: - addresses: - - 8.8.8.8 - - 8.8.4.4""" - user_data = user_data.replace("dhcp4: true", network_config) - - return user_data - - def build_autoinstall_iso(self) -> Path: - """Build Ubuntu autoinstall ISO following the guide.""" - logger.info("🔨 Building Ubuntu autoinstall ISO...") - - # Create ISO builder - self.iso_builder = UbuntuISOBuilder(VM_NAME) - - # Create user-data configuration - user_data = self.create_autoinstall_user_data() - - # Build autoinstall ISO - iso_output_path = Path(f"/tmp/{VM_NAME}-ubuntu-autoinstall.iso") - - success = self.iso_builder.build_autoinstall_iso( - user_data=user_data, - output_path=iso_output_path, - ubuntu_version=UBUNTU_VERSION, - ) - - if not success: - raise RuntimeError("Failed to build autoinstall ISO") - - logger.info(f"✅ Autoinstall ISO built successfully: {iso_output_path}") - return iso_output_path - - def deploy_vm(self) -> bool: - """Complete VM deployment process.""" - try: - logger.info("🚀 Starting ThrillWiki VM deployment...") - - # Step 1: Check SSH connectivity - logger.info("📡 Testing Unraid connectivity...") - if not self.vm_manager.authenticate(): - logger.error("❌ Cannot connect to Unraid server") - return False - - # Step 2: Build autoinstall ISO - logger.info("🔨 Building Ubuntu autoinstall ISO...") - iso_path = self.build_autoinstall_iso() - - # Step 3: Upload ISO to Unraid - logger.info("📤 Uploading autoinstall ISO to Unraid...") - self.vm_manager.upload_iso_to_unraid(iso_path) - - # Step 4: Create/update VM configuration - logger.info("⚙️ Creating VM configuration...") - success = self.vm_manager.create_vm( - vm_memory=VM_MEMORY, - vm_vcpus=VM_VCPUS, - vm_disk_size=VM_DISK_SIZE, - vm_ip=VM_IP, - ) - - if not success: - logger.error("❌ Failed to create VM configuration") - return False - - # Step 5: Start VM - logger.info("🟢 Starting VM...") - success = self.vm_manager.start_vm() - - if not success: - logger.error("❌ Failed to start VM") - return False - - logger.info("🎉 VM deployment completed successfully!") - logger.info("") - logger.info("📋 Next Steps:") - logger.info("1. VM is now booting with Ubuntu autoinstall") - logger.info("2. Installation will take 15-30 minutes") - logger.info("3. Use 'python main.py ip' to get VM IP when ready") - logger.info("4. SSH to VM and run /home/thrillwiki/deploy-thrillwiki.sh") - logger.info("") - - return True - - except Exception as e: - logger.error(f"❌ VM deployment failed: {e}") - return False - finally: - # Cleanup ISO builder temp files - if self.iso_builder: - self.iso_builder.cleanup() - - def get_vm_info(self) -> dict: - """Get VM information.""" - return { - "name": VM_NAME, - "status": self.vm_manager.vm_status(), - "ip": self.vm_manager.get_vm_ip(), - "memory": VM_MEMORY, - "vcpus": VM_VCPUS, - "disk_size": VM_DISK_SIZE, - } - - -def main(): - """Main entry point.""" - import argparse - - parser = argparse.ArgumentParser( - description="ThrillWiki VM Manager - Ubuntu Autoinstall on Unraid", - epilog=""" -Examples: - python main.py setup # Complete VM setup with autoinstall - python main.py start # Start existing VM - python main.py ip # Get VM IP address - python main.py status # Get VM status - python main.py delete # Remove VM completely - """, - formatter_class=argparse.RawDescriptionHelpFormatter, - ) - - parser.add_argument( - "action", - choices=[ - "setup", - "create", - "start", - "stop", - "status", - "ip", - "delete", - "info", - ], - help="Action to perform", - ) - - args = parser.parse_args() - - # Create orchestrator - orchestrator = ThrillWikiVMOrchestrator() - - if args.action == "setup": - logger.info("🚀 Setting up complete ThrillWiki VM environment...") - success = orchestrator.deploy_vm() - sys.exit(0 if success else 1) - - elif args.action == "create": - logger.info("⚙️ Creating VM configuration...") - success = orchestrator.vm_manager.create_vm( - VM_MEMORY, VM_VCPUS, VM_DISK_SIZE, VM_IP - ) - sys.exit(0 if success else 1) - - elif args.action == "start": - logger.info("🟢 Starting VM...") - success = orchestrator.vm_manager.start_vm() - sys.exit(0 if success else 1) - - elif args.action == "stop": - logger.info("🛑 Stopping VM...") - success = orchestrator.vm_manager.stop_vm() - sys.exit(0 if success else 1) - - elif args.action == "status": - status = orchestrator.vm_manager.vm_status() - print(f"VM Status: {status}") - sys.exit(0) - - elif args.action == "ip": - ip = orchestrator.vm_manager.get_vm_ip() - if ip: - print(f"VM IP: {ip}") - print(f"SSH: ssh thrillwiki@{ip}") - print( - f"Deploy: ssh thrillwiki@{ip} '/home/thrillwiki/deploy-thrillwiki.sh'" - ) - sys.exit(0) - else: - print("❌ Failed to get VM IP (VM may not be ready yet)") - sys.exit(1) - - elif args.action == "info": - info = orchestrator.get_vm_info() - print("🖥️ VM Information:") - print(f" Name: {info['name']}") - print(f" Status: {info['status']}") - print(f" IP: {info['ip'] or 'Not available'}") - print(f" Memory: {info['memory']} MB") - print(f" vCPUs: {info['vcpus']}") - print(f" Disk: {info['disk_size']} GB") - sys.exit(0) - - elif args.action == "delete": - logger.info("🗑️ Deleting VM and all files...") - success = orchestrator.vm_manager.delete_vm() - sys.exit(0 if success else 1) - - -if __name__ == "__main__": - main() diff --git a/shared/scripts/unraid/main_template.py b/shared/scripts/unraid/main_template.py deleted file mode 100644 index 105445b6..00000000 --- a/shared/scripts/unraid/main_template.py +++ /dev/null @@ -1,456 +0,0 @@ -#!/usr/bin/env python3 -""" -Unraid VM Manager for ThrillWiki - Template-Based Main Orchestrator -Uses pre-built template VMs for fast deployment instead of autoinstall. -""" - -import os -import sys -import logging -from pathlib import Path - -# Import our modular components -from template_manager import TemplateVMManager -from vm_manager_template import UnraidTemplateVMManager - - -class ConfigLoader: - """Dynamic configuration loader that reads environment variables when needed.""" - - def __init__(self): - # Try to load ***REMOVED***.unraid if it exists to ensure we have the - # latest config - self._load_env_file() - - def _load_env_file(self): - """Load ***REMOVED***.unraid file if it exists.""" - # Find the project directory (two levels up from this script) - script_dir = Path(__file__).parent - project_dir = script_dir.parent.parent - env_file = project_dir / "***REMOVED***.unraid" - - if env_file.exists(): - try: - with open(env_file, "r") as f: - for line in f: - line = line.strip() - if line and not line.startswith("#") and "=" in line: - key, value = line.split("=", 1) - # Remove quotes if present - value = value.strip("\"'") - # Only set if not already in environment (env vars - # take precedence) - if key not in os.environ: - os.environ[key] = value - - logging.info(f"📝 Loaded configuration from {env_file}") - except Exception as e: - logging.warning(f"⚠️ Could not load ***REMOVED***.unraid: {e}") - - @property - def UNRAID_HOST(self): - return os.environ.get("UNRAID_HOST", "localhost") - - @property - def UNRAID_USER(self): - return os.environ.get("UNRAID_USER", "root") - - @property - def VM_NAME(self): - return os.environ.get("VM_NAME", "thrillwiki-vm") - - @property - def VM_MEMORY(self): - return int(os.environ.get("VM_MEMORY", 4096)) - - @property - def VM_VCPUS(self): - return int(os.environ.get("VM_VCPUS", 2)) - - @property - def VM_DISK_SIZE(self): - return int(os.environ.get("VM_DISK_SIZE", 50)) - - @property - def SSH_PUBLIC_KEY(self): - return os.environ.get("SSH_PUBLIC_KEY", "") - - @property - def VM_IP(self): - return os.environ.get("VM_IP", "dhcp") - - @property - def VM_GATEWAY(self): - return os.environ.get("VM_GATEWAY", "192.168.20.1") - - @property - def VM_NETMASK(self): - return os.environ.get("VM_NETMASK", "255.255.255.0") - - @property - def VM_NETWORK(self): - return os.environ.get("VM_NETWORK", "192.168.20.0/24") - - @property - def REPO_URL(self): - return os.environ.get("REPO_URL", "") - - @property - def GITHUB_USERNAME(self): - return os.environ.get("GITHUB_USERNAME", "") - - @property - def GITHUB_TOKEN(self): - return os.environ.get("GITHUB_TOKEN", "") - - -# Create a global configuration instance -config = ConfigLoader() - -# Setup logging with reduced buffering -os.makedirs("logs", exist_ok=True) - -# Configure console handler with line buffering -console_handler = logging.StreamHandler(sys.stdout) -console_handler.setLevel(logging.INFO) -console_handler.setFormatter( - logging.Formatter("%(asctime)s - %(levelname)s - %(message)s") -) -# Force flush after each log message -console_handler.flush = lambda: sys.stdout.flush() - -# Configure file handler -file_handler = logging.FileHandler("logs/unraid-vm.log") -file_handler.setLevel(logging.INFO) -file_handler.setFormatter( - logging.Formatter("%(asctime)s - %(levelname)s - %(message)s") -) - -# Set up basic config with both handlers -logging.basicConfig( - level=logging.INFO, - handlers=[file_handler, console_handler], -) - -# Ensure stdout is line buffered for real-time output -sys.stdout.reconfigure(line_buffering=True) -logger = logging.getLogger(__name__) - - -class ThrillWikiTemplateVMOrchestrator: - """Main orchestrator for template-based ThrillWiki VM deployment.""" - - def __init__(self): - # Log current configuration for debugging - logger.info( - f"🔧 Using configuration: UNRAID_HOST={ - config.UNRAID_HOST}, UNRAID_USER={ - config.UNRAID_USER}, VM_NAME={ - config.VM_NAME}" - ) - - self.template_manager = TemplateVMManager( - config.UNRAID_HOST, config.UNRAID_USER - ) - self.vm_manager = UnraidTemplateVMManager( - config.VM_NAME, config.UNRAID_HOST, config.UNRAID_USER - ) - - def check_template_ready(self) -> bool: - """Check if template VM is ready for use.""" - logger.info("🔍 Checking template VM availability...") - - if not self.template_manager.check_template_exists(): - logger.error("❌ Template VM disk not found!") - logger.error( - "Please ensure 'thrillwiki-template-ubuntu' VM exists and is properly configured" - ) - logger.error( - "Template should be located at: /mnt/user/domains/thrillwiki-template-ubuntu/vdisk1.qcow2" - ) - return False - - # Check template status - if not self.template_manager.update_template(): - logger.warning("⚠️ Template VM may be running - this could cause issues") - logger.warning( - "Ensure the template VM is stopped before creating new instances" - ) - - info = self.template_manager.get_template_info() - if info: - logger.info(f"📋 Template Info:") - logger.info(f" Virtual Size: {info['virtual_size']}") - logger.info(f" File Size: {info['file_size']}") - logger.info(f" Last Modified: {info['last_modified']}") - - return True - - def deploy_vm_from_template(self) -> bool: - """Complete template-based VM deployment process.""" - try: - logger.info("🚀 Starting ThrillWiki template-based VM deployment...") - - # Step 1: Check SSH connectivity - logger.info("📡 Testing Unraid connectivity...") - if not self.vm_manager.authenticate(): - logger.error("❌ Cannot connect to Unraid server") - return False - - # Step 2: Check template availability - logger.info("🔍 Verifying template VM...") - if not self.check_template_ready(): - logger.error("❌ Template VM not ready") - return False - - # Step 3: Create VM from template - logger.info("⚙️ Creating VM from template...") - success = self.vm_manager.create_vm_from_template( - vm_memory=config.VM_MEMORY, - vm_vcpus=config.VM_VCPUS, - vm_disk_size=config.VM_DISK_SIZE, - vm_ip=config.VM_IP, - ) - - if not success: - logger.error("❌ Failed to create VM from template") - return False - - # Step 4: Start VM - logger.info("🟢 Starting VM...") - success = self.vm_manager.start_vm() - - if not success: - logger.error("❌ Failed to start VM") - return False - - logger.info("🎉 Template-based VM deployment completed successfully!") - logger.info("") - logger.info("📋 Next Steps:") - logger.info("1. VM is now booting from template disk") - logger.info("2. Boot time should be much faster (2-5 minutes)") - logger.info("3. Use 'python main_template.py ip' to get VM IP when ready") - logger.info("4. SSH to VM and run deployment commands") - logger.info("") - - return True - - except Exception as e: - logger.error(f"❌ Template VM deployment failed: {e}") - return False - - def deploy_and_configure_thrillwiki(self) -> bool: - """Deploy VM from template and configure ThrillWiki.""" - try: - logger.info("🚀 Starting complete ThrillWiki deployment from template...") - - # Step 1: Deploy VM from template - if not self.deploy_vm_from_template(): - return False - - # Step 2: Wait for VM to be accessible and configure ThrillWiki - if config.REPO_URL: - logger.info("🔧 Configuring ThrillWiki on VM...") - success = self.vm_manager.customize_vm_for_thrillwiki( - config.REPO_URL, config.GITHUB_TOKEN - ) - - if success: - vm_ip = self.vm_manager.get_vm_ip() - logger.info("🎉 Complete ThrillWiki deployment successful!") - logger.info(f"🌐 ThrillWiki is available at: http://{vm_ip}:8000") - else: - logger.warning( - "⚠️ VM deployed but ThrillWiki configuration may have failed" - ) - logger.info( - "You can manually configure ThrillWiki by SSH'ing to the VM" - ) - else: - logger.info( - "📝 No repository URL provided - VM deployed but ThrillWiki not configured" - ) - logger.info( - "Set REPO_URL environment variable to auto-configure ThrillWiki" - ) - - return True - - except Exception as e: - logger.error(f"❌ Complete deployment failed: {e}") - return False - - def get_vm_info(self) -> dict: - """Get VM information.""" - return { - "name": config.VM_NAME, - "status": self.vm_manager.vm_status(), - "ip": self.vm_manager.get_vm_ip(), - "memory": config.VM_MEMORY, - "vcpus": config.VM_VCPUS, - "disk_size": config.VM_DISK_SIZE, - "deployment_type": "template-based", - } - - -def main(): - """Main entry point.""" - import argparse - - parser = argparse.ArgumentParser( - description="ThrillWiki Template-Based VM Manager - Fast VM deployment using templates", - epilog=""" -Examples: - python main_template.py setup # Deploy VM from template only - python main_template.py deploy # Deploy VM and configure ThrillWiki - python main_template.py start # Start existing VM - python main_template.py ip # Get VM IP address - python main_template.py status # Get VM status - python main_template.py delete # Remove VM completely - python main_template.py template # Manage template VM - """, - formatter_class=argparse.RawDescriptionHelpFormatter, - ) - - parser.add_argument( - "action", - choices=[ - "setup", - "deploy", - "create", - "start", - "stop", - "status", - "ip", - "delete", - "info", - "template", - ], - help="Action to perform", - ) - - parser.add_argument( - "template_action", - nargs="?", - choices=["info", "check", "update", "list"], - help="Template management action (used with 'template' action)", - ) - - args = parser.parse_args() - - # Create orchestrator - orchestrator = ThrillWikiTemplateVMOrchestrator() - - if args.action == "setup": - logger.info("🚀 Setting up VM from template...") - success = orchestrator.deploy_vm_from_template() - sys.exit(0 if success else 1) - - elif args.action == "deploy": - logger.info("🚀 Complete ThrillWiki deployment from template...") - success = orchestrator.deploy_and_configure_thrillwiki() - sys.exit(0 if success else 1) - - elif args.action == "create": - logger.info("⚙️ Creating VM from template...") - success = orchestrator.vm_manager.create_vm_from_template( - config.VM_MEMORY, - config.VM_VCPUS, - config.VM_DISK_SIZE, - config.VM_IP, - ) - sys.exit(0 if success else 1) - - elif args.action == "start": - logger.info("🟢 Starting VM...") - success = orchestrator.vm_manager.start_vm() - sys.exit(0 if success else 1) - - elif args.action == "stop": - logger.info("🛑 Stopping VM...") - success = orchestrator.vm_manager.stop_vm() - sys.exit(0 if success else 1) - - elif args.action == "status": - status = orchestrator.vm_manager.vm_status() - print(f"VM Status: {status}") - sys.exit(0) - - elif args.action == "ip": - ip = orchestrator.vm_manager.get_vm_ip() - if ip: - print(f"VM IP: {ip}") - print(f"SSH: ssh thrillwiki@{ip}") - print(f"ThrillWiki: http://{ip}:8000") - sys.exit(0) - else: - print("❌ Failed to get VM IP (VM may not be ready yet)") - sys.exit(1) - - elif args.action == "info": - info = orchestrator.get_vm_info() - print("🖥️ VM Information:") - print(f" Name: {info['name']}") - print(f" Status: {info['status']}") - print(f" IP: {info['ip'] or 'Not available'}") - print(f" Memory: {info['memory']} MB") - print(f" vCPUs: {info['vcpus']}") - print(f" Disk: {info['disk_size']} GB") - print(f" Type: {info['deployment_type']}") - sys.exit(0) - - elif args.action == "delete": - logger.info("🗑️ Deleting VM and all files...") - success = orchestrator.vm_manager.delete_vm() - sys.exit(0 if success else 1) - - elif args.action == "template": - template_action = args.template_action or "info" - - if template_action == "info": - logger.info("📋 Template VM Information") - info = orchestrator.template_manager.get_template_info() - if info: - print(f"Template Path: {info['template_path']}") - print(f"Virtual Size: {info['virtual_size']}") - print(f"File Size: {info['file_size']}") - print(f"Last Modified: {info['last_modified']}") - else: - print("❌ Failed to get template information") - sys.exit(1) - - elif template_action == "check": - if orchestrator.template_manager.check_template_exists(): - logger.info("✅ Template VM disk exists and is ready to use") - sys.exit(0) - else: - logger.error("❌ Template VM disk not found") - sys.exit(1) - - elif template_action == "update": - success = orchestrator.template_manager.update_template() - sys.exit(0 if success else 1) - - elif template_action == "list": - logger.info("📋 Template-based VM Instances") - instances = orchestrator.template_manager.list_template_instances() - if instances: - for instance in instances: - status_emoji = ( - "🟢" - if instance["status"] == "running" - else "🔴" if instance["status"] == "shut off" else "🟡" - ) - print( - f"{status_emoji} { - instance['name']} ({ - instance['status']})" - ) - else: - print("No template instances found") - - sys.exit(0) - - -if __name__ == "__main__": - main() diff --git a/shared/scripts/unraid/setup-complete-automation.sh b/shared/scripts/unraid/setup-complete-automation.sh deleted file mode 100755 index 34095eeb..00000000 --- a/shared/scripts/unraid/setup-complete-automation.sh +++ /dev/null @@ -1,1109 +0,0 @@ -#!/bin/bash - -# ThrillWiki Complete Unraid Automation Setup -# This script automates the entire VM creation and deployment process on Unraid -# -# Usage: -# ./setup-complete-automation.sh # Standard setup -# ./setup-complete-automation.sh --reset # Delete VM and config, start completely fresh -# ./setup-complete-automation.sh --reset-vm # Delete VM only, keep configuration -# ./setup-complete-automation.sh --reset-config # Delete config only, keep VM - -# Function to show help -show_help() { - echo "ThrillWiki CI/CD Automation Setup" - echo "" - echo "Usage:" - echo " $0 Set up or update ThrillWiki automation" - echo " $0 -y Non-interactive mode, use saved configuration" - echo " $0 --reset Delete VM and config, start completely fresh" - echo " $0 --reset-vm Delete VM only, keep configuration" - echo " $0 --reset-config Delete config only, keep VM" - echo " $0 --help Show this help message" - echo "" - echo "Options:" - echo " -y, --yes Non-interactive mode - use saved configuration" - echo " and passwords without prompting. Requires existing" - echo " configuration file with saved settings." - echo "" - echo "Reset Options:" - echo " --reset Completely removes existing VM, disks, and config" - echo " before starting fresh installation" - echo " --reset-vm Removes only the VM and disks, preserves saved" - echo " configuration to avoid re-entering settings" - echo " --reset-config Removes only the saved configuration, preserves" - echo " VM and prompts for fresh configuration input" - echo " --help Display this help and exit" - echo "" - echo "Examples:" - echo " $0 # Normal setup/update" - echo " $0 -y # Non-interactive setup with saved config" - echo " $0 --reset # Complete fresh installation" - echo " $0 --reset-vm # Fresh VM with saved settings" - echo " $0 --reset-config # Re-configure existing VM" - exit 0 -} - -# Check for help flag -if [[ "$1" == "--help" || "$1" == "-h" ]]; then - show_help -fi - -# Parse command line flags -RESET_ALL=false -RESET_VM_ONLY=false -RESET_CONFIG_ONLY=false -NON_INTERACTIVE=false - -# Process all arguments -while [[ $# -gt 0 ]]; do - case $1 in - -y|--yes) - NON_INTERACTIVE=true - echo "🤖 NON-INTERACTIVE MODE: Using saved configuration only" - shift - ;; - --reset) - RESET_ALL=true - echo "🔄 COMPLETE RESET MODE: Will delete VM and configuration" - shift - ;; - --reset-vm) - RESET_VM_ONLY=true - echo "🔄 VM RESET MODE: Will delete VM only, keep configuration" - shift - ;; - --reset-config) - RESET_CONFIG_ONLY=true - echo "🔄 CONFIG RESET MODE: Will delete configuration only, keep VM" - shift - ;; - --help|-h) - show_help - ;; - *) - echo "Unknown option: $1" - show_help - ;; - esac -done - -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -log() { - echo -e "${BLUE}[AUTOMATION]${NC} $1" -} - -log_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -log_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -log_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Configuration -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" -LOG_DIR="$PROJECT_DIR/logs" - -# Default values -DEFAULT_UNRAID_HOST="" -DEFAULT_VM_NAME="thrillwiki-vm" -DEFAULT_VM_MEMORY="4096" -DEFAULT_VM_VCPUS="2" -DEFAULT_VM_DISK_SIZE="50" -DEFAULT_WEBHOOK_PORT="9000" - -# Configuration file -CONFIG_FILE="$PROJECT_DIR/.thrillwiki-config" - -# Function to save configuration -save_config() { - log "Saving configuration to $CONFIG_FILE..." - cat > "$CONFIG_FILE" << EOF -# ThrillWiki Automation Configuration -# This file stores your settings to avoid re-entering them each time - -# Unraid Server Configuration -UNRAID_HOST="$UNRAID_HOST" -UNRAID_USER="$UNRAID_USER" -VM_NAME="$VM_NAME" -VM_MEMORY="$VM_MEMORY" -VM_VCPUS="$VM_VCPUS" -VM_DISK_SIZE="$VM_DISK_SIZE" - -# Network Configuration -VM_IP="$VM_IP" -VM_GATEWAY="$VM_GATEWAY" -VM_NETMASK="$VM_NETMASK" -VM_NETWORK="$VM_NETWORK" - -# GitHub Configuration -REPO_URL="$REPO_URL" -GITHUB_USERNAME="$GITHUB_USERNAME" -GITHUB_API_ENABLED="$GITHUB_API_ENABLED" -GITHUB_AUTH_METHOD="$GITHUB_AUTH_METHOD" - -# Webhook Configuration -WEBHOOK_PORT="$WEBHOOK_PORT" -WEBHOOK_ENABLED="$WEBHOOK_ENABLED" - -# SSH Configuration (path to key, not the key content) -SSH_KEY_PATH="$HOME/.ssh/thrillwiki_vm" -EOF - - log_success "Configuration saved to $CONFIG_FILE" -} - -# Function to load configuration -load_config() { - if [ -f "$CONFIG_FILE" ]; then - log "Loading existing configuration from $CONFIG_FILE..." - source "$CONFIG_FILE" - return 0 - else - return 1 - fi -} - -# Function for non-interactive configuration loading -load_non_interactive_config() { - log "=== Non-Interactive Configuration Loading ===" - - # Load saved configuration - if ! load_config; then - log_error "No saved configuration found. Cannot run in non-interactive mode." - log_error "Please run the script without -y flag first to create initial configuration." - exit 1 - fi - - log_success "Loaded saved configuration successfully" - - # Check for required environment variables for passwords - if [ -z "${UNRAID_PASSWORD:-}" ]; then - log_error "UNRAID_PASSWORD environment variable not set." - log_error "For non-interactive mode, set: export UNRAID_PASSWORD='your_password'" - exit 1 - fi - - # Handle GitHub authentication based on saved method - if [ -n "$GITHUB_USERNAME" ] && [ "$GITHUB_API_ENABLED" = "true" ]; then - if [ "$GITHUB_AUTH_METHOD" = "oauth" ]; then - # Check if OAuth token is still valid - if python3 "$SCRIPT_DIR/../github-auth.py" validate 2>/dev/null; then - GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token) - log "Using existing OAuth token" - else - log_error "OAuth token expired and cannot refresh in non-interactive mode" - log_error "Please run without -y flag to re-authenticate with GitHub" - exit 1 - fi - else - # Personal access token method - if [ -z "${GITHUB_TOKEN:-}" ]; then - log_error "GITHUB_TOKEN environment variable not set." - log_error "For non-interactive mode, set: export GITHUB_TOKEN='your_token'" - exit 1 - fi - fi - fi - - # Handle webhook secret - if [ "$WEBHOOK_ENABLED" = "true" ]; then - if [ -z "${WEBHOOK_SECRET:-}" ]; then - log_error "WEBHOOK_SECRET environment variable not set." - log_error "For non-interactive mode, set: export WEBHOOK_SECRET='your_secret'" - exit 1 - fi - fi - - log_success "All required credentials loaded from environment variables" - log "Configuration summary:" - echo " Unraid Host: $UNRAID_HOST" - echo " VM Name: $VM_NAME" - echo " VM IP: $VM_IP" - echo " Repository: $REPO_URL" - echo " GitHub Auth: $GITHUB_AUTH_METHOD" - echo " Webhook Enabled: $WEBHOOK_ENABLED" -} - -# Function to prompt for configuration -prompt_unraid_config() { - # In non-interactive mode, use saved config only - if [ "$NON_INTERACTIVE" = "true" ]; then - load_non_interactive_config - return 0 - fi - - log "=== Unraid VM Configuration ===" - echo - - # Try to load existing config first - if load_config; then - log_success "Loaded existing configuration" - echo "Current settings:" - echo " Unraid Host: $UNRAID_HOST" - echo " VM Name: $VM_NAME" - echo " VM IP: $VM_IP" - echo " Repository: $REPO_URL" - echo - read -p "Use existing configuration? (y/n): " use_existing - if [ "$use_existing" = "y" ] || [ "$use_existing" = "Y" ]; then - # Still need to get sensitive info that we don't save - read -s -p "Enter Unraid [PASSWORD-REMOVED] - echo - - # Handle GitHub authentication based on saved method - if [ -n "$GITHUB_USERNAME" ] && [ "$GITHUB_API_ENABLED" = "true" ]; then - if [ "$GITHUB_AUTH_METHOD" = "oauth" ]; then - # Check if OAuth token is still valid - if python3 "$SCRIPT_DIR/../github-auth.py" validate 2>/dev/null; then - GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token) - log "Using existing OAuth token" - else - log "OAuth token expired, re-authenticating..." - if python3 "$SCRIPT_DIR/../github-auth.py" login; then - GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token) - log_success "OAuth token refreshed" - else - log_error "OAuth re-authentication failed" - exit 1 - fi - fi - else - # Personal access token method - read -s -p "Enter GitHub personal access token: " GITHUB_TOKEN - echo - fi - fi - - if [ "$WEBHOOK_ENABLED" = "true" ]; then - read -s -p "Enter GitHub webhook secret: " WEBHOOK_SECRET - echo - fi - return 0 - fi - fi - - # Prompt for new configuration - read -p "Enter your Unraid server IP address: " UNRAID_HOST - save_config - - read -p "Enter Unraid username (default: root): " UNRAID_USER - UNRAID_USER=${UNRAID_USER:-root} - save_config - - read -s -p "Enter Unraid [PASSWORD-REMOVED] - echo - # Note: Password not saved for security - - read -p "Enter VM name (default: $DEFAULT_VM_NAME): " VM_NAME - VM_NAME=${VM_NAME:-$DEFAULT_VM_NAME} - save_config - - read -p "Enter VM memory in MB (default: $DEFAULT_VM_MEMORY): " VM_MEMORY - VM_MEMORY=${VM_MEMORY:-$DEFAULT_VM_MEMORY} - save_config - - read -p "Enter VM vCPUs (default: $DEFAULT_VM_VCPUS): " VM_VCPUS - VM_VCPUS=${VM_VCPUS:-$DEFAULT_VM_VCPUS} - save_config - - read -p "Enter VM disk size in GB (default: $DEFAULT_VM_DISK_SIZE): " VM_DISK_SIZE - VM_DISK_SIZE=${VM_DISK_SIZE:-$DEFAULT_VM_DISK_SIZE} - save_config - - read -p "Enter GitHub repository URL: " REPO_URL - save_config - - # GitHub API Configuration - echo - log "=== GitHub API Configuration ===" - echo "Choose GitHub authentication method:" - echo "1. OAuth Device Flow (recommended - secure, supports private repos)" - echo "2. Personal Access Token (manual token entry)" - echo "3. Skip (public repositories only)" - - while true; do - read -p "Select option (1-3): " auth_choice - case $auth_choice in - 1) - log "Using GitHub OAuth Device Flow..." - if python3 "$SCRIPT_DIR/../github-auth.py" validate 2>/dev/null; then - log "Existing GitHub authentication found and valid" - GITHUB_USERNAME=$(python3 "$SCRIPT_DIR/../github-auth.py" whoami 2>/dev/null | grep "You are authenticated as:" | cut -d: -f2 | xargs) - GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token) - else - log "Starting GitHub OAuth authentication..." - if python3 "$SCRIPT_DIR/../github-auth.py" login; then - GITHUB_USERNAME=$(python3 "$SCRIPT_DIR/../github-auth.py" whoami 2>/dev/null | grep "You are authenticated as:" | cut -d: -f2 | xargs) - GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token) - log_success "GitHub OAuth authentication completed" - else - log_error "GitHub authentication failed" - continue - fi - fi - GITHUB_API_ENABLED=true - GITHUB_AUTH_METHOD="oauth" - break - ;; - 2) - read -p "Enter GitHub username: " GITHUB_USERNAME - read -s -p "Enter GitHub personal access token: " GITHUB_TOKEN - echo - if [ -n "$GITHUB_USERNAME" ] && [ -n "$GITHUB_TOKEN" ]; then - GITHUB_API_ENABLED=true - GITHUB_AUTH_METHOD="token" - log "Personal access token configured" - else - log_error "Both username and token are required" - continue - fi - break - ;; - 3) - GITHUB_USERNAME="" - GITHUB_TOKEN="" - GITHUB_API_ENABLED=false - GITHUB_AUTH_METHOD="none" - log "Skipping GitHub API - using public access only" - break - ;; - *) - echo "Invalid option. Please select 1, 2, or 3." - ;; - esac - done - - # Save GitHub configuration - save_config - log "GitHub authentication configuration saved" - - # Webhook Configuration - echo - read -s -p "Enter GitHub webhook secret (optional, press Enter to skip): " WEBHOOK_SECRET - echo - - # If no webhook secret provided, disable webhook functionality - if [ -z "$WEBHOOK_SECRET" ]; then - log "No webhook secret provided - webhook functionality will be disabled" - WEBHOOK_ENABLED=false - else - WEBHOOK_ENABLED=true - fi - - read -p "Enter webhook port (default: $DEFAULT_WEBHOOK_PORT): " WEBHOOK_PORT - WEBHOOK_PORT=${WEBHOOK_PORT:-$DEFAULT_WEBHOOK_PORT} - - # Save webhook configuration - save_config - log "Webhook configuration saved" - - # Get VM network configuration preference - echo - log "=== Network Configuration ===" - echo "Choose network configuration method:" - echo "1. DHCP (automatic IP assignment - recommended)" - echo "2. Static IP (manual IP configuration)" - - while true; do - read -p "Select option (1-2): " network_choice - case $network_choice in - 1) - log "Using DHCP network configuration..." - VM_IP="dhcp" - VM_GATEWAY="192.168.20.1" - VM_NETMASK="255.255.255.0" - VM_NETWORK="192.168.20.0/24" - NETWORK_MODE="dhcp" - break - ;; - 2) - log "Using static IP network configuration..." - # Get VM IP address with proper range validation - while true; do - read -p "Enter VM IP address (192.168.20.10-192.168.20.100): " VM_IP - if [[ "$VM_IP" =~ ^192\.168\.20\.([1-9][0-9]|100)$ ]]; then - local ip_last_octet="${BASH_REMATCH[1]}" - if [ "$ip_last_octet" -ge 10 ] && [ "$ip_last_octet" -le 100 ]; then - break - fi - fi - echo "Invalid IP address. Please enter an IP in the range 192.168.20.10-192.168.20.100" - done - VM_GATEWAY="192.168.20.1" - VM_NETMASK="255.255.255.0" - VM_NETWORK="192.168.20.0/24" - NETWORK_MODE="static" - break - ;; - *) - echo "Invalid option. Please select 1 or 2." - ;; - esac - done - - # Save final network configuration - save_config - log "Network configuration saved - setup complete!" -} - -# Generate SSH keys for VM access -setup_ssh_keys() { - log "Setting up SSH keys for VM access..." - - local ssh_key_path="$HOME/.ssh/thrillwiki_vm" - local ssh_config_path="$HOME/.ssh/config" - - if [ ! -f "$ssh_key_path" ]; then - ssh-keygen -t rsa -b 4096 -f "$ssh_key_path" -N "" -C "thrillwiki-vm-access" - log_success "SSH key generated: $ssh_key_path" - else - log "SSH key already exists: $ssh_key_path" - fi - - # Add SSH config entry - if ! grep -q "Host $VM_NAME" "$ssh_config_path" 2>/dev/null; then - cat >> "$ssh_config_path" << EOF - -# ThrillWiki VM -Host $VM_NAME - HostName %h - User ubuntu - IdentityFile $ssh_key_path - StrictHostKeyChecking no - UserKnownHostsFile /dev/null -EOF - log_success "SSH config updated" - fi - - # Store public key for VM setup - SSH_PUBLIC_KEY=$(cat "$ssh_key_path.pub") - export SSH_PUBLIC_KEY -} - -# Setup Unraid host access -setup_unraid_access() { - log "Setting up Unraid server access..." - - local unraid_key_path="$HOME/.ssh/unraid_access" - - if [ ! -f "$unraid_key_path" ]; then - ssh-keygen -t rsa -b 4096 -f "$unraid_key_path" -N "" -C "unraid-access" - - log "Please add this public key to your Unraid server:" - echo "---" - cat "$unraid_key_path.pub" - echo "---" - echo - log "Add this to /root/.ssh/***REMOVED*** on your Unraid server" - read -p "Press Enter when you've added the key..." - fi - - # Test Unraid connection - log "Testing Unraid connection..." - if ssh -i "$unraid_key_path" -o ConnectTimeout=5 -o StrictHostKeyChecking=no "$UNRAID_USER@$UNRAID_HOST" "echo 'Connected to Unraid successfully'"; then - log_success "Unraid connection test passed" - else - log_error "Unraid connection test failed" - exit 1 - fi - - # Update SSH config for Unraid - if ! grep -q "Host unraid" "$HOME/.ssh/config" 2>/dev/null; then - cat >> "$HOME/.ssh/config" << EOF - -# Unraid Server -Host unraid - HostName $UNRAID_HOST - User $UNRAID_USER - IdentityFile $unraid_key_path - StrictHostKeyChecking no -EOF - fi -} - -# Create environment files -create_environment_files() { - log "Creating environment configuration files..." - - # Get SSH public key content safely - local ssh_key_path="$HOME/.ssh/thrillwiki_vm.pub" - local ssh_public_key="" - if [ -f "$ssh_key_path" ]; then - ssh_public_key=$(cat "$ssh_key_path") - fi - - # Unraid VM environment - cat > "$PROJECT_DIR/***REMOVED***.unraid" << EOF -# Unraid VM Configuration -UNRAID_HOST=$UNRAID_HOST -UNRAID_USER=$UNRAID_USER -UNRAID_PASSWORD=$UNRAID_PASSWORD -VM_NAME=$VM_NAME -VM_MEMORY=$VM_MEMORY -VM_VCPUS=$VM_VCPUS -VM_DISK_SIZE=$VM_DISK_SIZE -SSH_PUBLIC_KEY="$ssh_public_key" - -# Network Configuration -VM_IP=$VM_IP -VM_GATEWAY=$VM_GATEWAY -VM_NETMASK=$VM_NETMASK -VM_NETWORK=$VM_NETWORK - -# GitHub Configuration -REPO_URL=$REPO_URL -GITHUB_USERNAME=$GITHUB_USERNAME -GITHUB_TOKEN=$GITHUB_TOKEN -GITHUB_API_ENABLED=$GITHUB_API_ENABLED -EOF - - # Webhook environment (updated with VM info) - cat > "$PROJECT_DIR/***REMOVED***.webhook" << EOF -# ThrillWiki Webhook Configuration -WEBHOOK_PORT=$WEBHOOK_PORT -WEBHOOK_SECRET=$WEBHOOK_SECRET -WEBHOOK_ENABLED=$WEBHOOK_ENABLED -VM_HOST=$VM_IP -VM_PORT=22 -VM_USER=ubuntu -VM_KEY_PATH=$HOME/.ssh/thrillwiki_vm -VM_PROJECT_PATH=/home/ubuntu/thrillwiki -REPO_URL=$REPO_URL -DEPLOY_BRANCH=main - -# GitHub API Configuration -GITHUB_USERNAME=$GITHUB_USERNAME -GITHUB_TOKEN=$GITHUB_TOKEN -GITHUB_API_ENABLED=$GITHUB_API_ENABLED -EOF - - log_success "Environment files created" -} - -# Install required tools -install_dependencies() { - log "Installing required dependencies..." - - # Check for required tools - local missing_tools=() - local mac_tools=() - - command -v python3 >/dev/null 2>&1 || missing_tools+=("python3") - command -v ssh >/dev/null 2>&1 || missing_tools+=("openssh-client") - command -v scp >/dev/null 2>&1 || missing_tools+=("openssh-client") - - # Check for ISO creation tools and handle platform differences - if ! command -v genisoimage >/dev/null 2>&1 && ! command -v mkisofs >/dev/null 2>&1 && ! command -v hdiutil >/dev/null 2>&1; then - if [[ "$OSTYPE" == "linux-gnu"* ]]; then - missing_tools+=("genisoimage") - elif [[ "$OSTYPE" == "darwin"* ]]; then - # On macOS, hdiutil should be available, but add cdrtools as backup - if command -v brew >/dev/null 2>&1; then - mac_tools+=("cdrtools") - fi - fi - fi - - # Install Linux packages - if [ ${#missing_tools[@]} -gt 0 ]; then - log "Installing missing tools for Linux: ${missing_tools[*]}" - - if command -v apt-get >/dev/null 2>&1; then - sudo apt-get update - sudo apt-get install -y "${missing_tools[@]}" - elif command -v yum >/dev/null 2>&1; then - sudo yum install -y "${missing_tools[@]}" - elif command -v dnf >/dev/null 2>&1; then - sudo dnf install -y "${missing_tools[@]}" - else - log_error "Linux package manager not found. Please install: ${missing_tools[*]}" - exit 1 - fi - fi - - # Install macOS packages - if [ ${#mac_tools[@]} -gt 0 ]; then - log "Installing additional tools for macOS: ${mac_tools[*]}" - if command -v brew >/dev/null 2>&1; then - brew install "${mac_tools[@]}" - else - log "Homebrew not found. Skipping optional tool installation." - log "Note: hdiutil should be available on macOS for ISO creation" - fi - fi - - # Install Python dependencies - if [ -f "$PROJECT_DIR/pyproject.toml" ]; then - log "Installing Python dependencies with UV..." - if ! command -v uv >/dev/null 2>&1; then - curl -LsSf https://astral.sh/uv/install.sh | sh - source ~/.cargo/env - fi - uv sync - fi - - log_success "Dependencies installed" -} - -# Create VM using the VM manager -create_vm() { - log "Creating VM on Unraid server..." - - # Export all environment variables from the file - set -a # automatically export all variables - source "$PROJECT_DIR/***REMOVED***.unraid" - set +a # turn off automatic export - - # Run complete VM setup (builds ISO, creates VM, starts VM) - cd "$PROJECT_DIR" - python3 scripts/unraid/main.py setup - - if [ $? -eq 0 ]; then - log_success "VM setup completed successfully" - else - log_error "VM setup failed" - exit 1 - fi -} - -# Wait for VM to be ready and get IP -wait_for_vm() { - log "Waiting for VM to be ready..." - sleep 120 - # Export all environment variables from the file - set -a # automatically export all variables - source "$PROJECT_DIR/***REMOVED***.unraid" - set +a # turn off automatic export - - local max_attempts=60 - local attempt=1 - - while [ $attempt -le $max_attempts ]; do - VM_IP=$(python3 scripts/unraid/main.py ip 2>/dev/null | grep "VM IP:" | cut -d' ' -f3) - - if [ -n "$VM_IP" ]; then - log_success "VM is ready with IP: $VM_IP" - - # Update SSH config with actual IP - sed -i.bak "s/HostName %h/HostName $VM_IP/" "$HOME/.ssh/config" - - # Update webhook environment with IP - sed -i.bak "s/VM_HOST=$VM_NAME/VM_HOST=$VM_IP/" "$PROJECT_DIR/***REMOVED***.webhook" - - return 0 - fi - - log "Waiting for VM to get IP... (attempt $attempt/$max_attempts)" - sleep 30 - ((attempt++)) - done - - log_error "VM failed to get IP address" - exit 1 -} - -# Configure VM for ThrillWiki -configure_vm() { - log "Configuring VM for ThrillWiki deployment..." - - local vm_setup_script="/tmp/vm_thrillwiki_setup.sh" - - # Create VM setup script - cat > "$vm_setup_script" << 'EOF' -#!/bin/bash -set -e - -echo "Setting up VM for ThrillWiki..." - -# Update system -sudo apt update && sudo apt upgrade -y - -# Install required packages -sudo apt install -y git curl build-essential python3-pip lsof postgresql postgresql-contrib nginx - -# Install UV -curl -LsSf https://astral.sh/uv/install.sh | sh -source ~/.cargo/env - -# Configure PostgreSQL -sudo -u postgres psql << PSQL -CREATE DATABASE thrillwiki; -CREATE USER thrillwiki_user WITH ENCRYPTED PASSWORD 'thrillwiki_pass'; -GRANT ALL PRIVILEGES ON DATABASE thrillwiki TO thrillwiki_user; -\q -PSQL - -# Clone repository -git clone REPO_URL_PLACEHOLDER thrillwiki -cd thrillwiki - -# Install dependencies -~/.cargo/bin/uv sync - -# Create directories -mkdir -p logs backups - -# Make scripts executable -chmod +x scripts/*.sh - -# Run initial setup -~/.cargo/bin/uv run manage.py migrate -~/.cargo/bin/uv run manage.py collectstatic --noinput - -# Install systemd services -sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ -sudo sed -i 's|/home/ubuntu|/home/ubuntu|g' /etc/systemd/system/thrillwiki.service -sudo systemctl daemon-reload -sudo systemctl enable thrillwiki.service - -echo "VM setup completed!" -EOF - - # Replace placeholder with actual repo URL - sed -i "s|REPO_URL_PLACEHOLDER|$REPO_URL|g" "$vm_setup_script" - - # Copy and execute setup script on VM - scp "$vm_setup_script" "$VM_NAME:/tmp/" - ssh "$VM_NAME" "bash /tmp/vm_thrillwiki_setup.sh" - - # Cleanup - rm "$vm_setup_script" - - log_success "VM configured for ThrillWiki" -} - -# Start services -start_services() { - log "Starting ThrillWiki services..." - - # Start VM service - ssh "$VM_NAME" "sudo systemctl start thrillwiki" - - # Verify service is running - if ssh "$VM_NAME" "systemctl is-active --quiet thrillwiki"; then - log_success "ThrillWiki service started successfully" - else - log_error "Failed to start ThrillWiki service" - exit 1 - fi - - # Get service status - log "Service status:" - ssh "$VM_NAME" "systemctl status thrillwiki --no-pager -l" -} - -# Setup webhook listener -setup_webhook_listener() { - log "Setting up webhook listener..." - - # Create webhook start script - cat > "$PROJECT_DIR/start-webhook.sh" << 'EOF' -#!/bin/bash -cd "$(dirname "$0")" -source ***REMOVED***.webhook -python3 scripts/webhook-listener.py -EOF - - chmod +x "$PROJECT_DIR/start-webhook.sh" - - log_success "Webhook listener configured" - log "You can start the webhook listener with: ./start-webhook.sh" -} - -# Perform end-to-end test -test_deployment() { - log "Performing end-to-end deployment test..." - - # Test VM connectivity - if ssh "$VM_NAME" "echo 'VM connectivity test passed'"; then - log_success "VM connectivity test passed" - else - log_error "VM connectivity test failed" - return 1 - fi - - # Test ThrillWiki service - if ssh "$VM_NAME" "curl -f http://localhost:8000 >/dev/null 2>&1"; then - log_success "ThrillWiki service test passed" - else - log_warning "ThrillWiki service test failed - checking logs..." - ssh "$VM_NAME" "journalctl -u thrillwiki --no-pager -l | tail -20" - fi - - # Test deployment script - log "Testing deployment script..." - ssh "$VM_NAME" "cd thrillwiki && ./scripts/vm-deploy.sh status" - - log_success "End-to-end test completed" -} - -# Generate final instructions -generate_instructions() { - log "Generating final setup instructions..." - - cat > "$PROJECT_DIR/UNRAID_SETUP_COMPLETE.md" << EOF -# ThrillWiki Unraid Automation - Setup Complete! 🎉 - -Your ThrillWiki CI/CD system has been fully automated and deployed! - -## VM Information -- **VM Name**: $VM_NAME -- **VM IP**: $VM_IP -- **SSH Access**: \`ssh $VM_NAME\` - -## Services Status -- **ThrillWiki Service**: Running on VM -- **Database**: PostgreSQL configured -- **Web Server**: Available at http://$VM_IP:8000 - -## Next Steps - -### 1. Start Webhook Listener -\`\`\`bash -./start-webhook.sh -\`\`\` - -### 2. Configure GitHub Webhook -- Go to your repository: $REPO_URL -- Settings → Webhooks → Add webhook -- **Payload URL**: http://YOUR_PUBLIC_IP:$WEBHOOK_PORT/webhook -- **Content type**: application/json -- **Secret**: (your webhook secret) -- **Events**: Just the push event - -### 3. Test the System -\`\`\`bash -# Test VM connection -ssh $VM_NAME - -# Test service status -ssh $VM_NAME "systemctl status thrillwiki" - -# Test manual deployment -ssh $VM_NAME "cd thrillwiki && ./scripts/vm-deploy.sh" - -# Make a test commit to trigger automatic deployment -git add . -git commit -m "Test automated deployment" -git push origin main -\`\`\` - -## Management Commands - -### VM Management -\`\`\`bash -# Check VM status -python3 scripts/unraid/vm-manager.py status - -# Start/stop VM -python3 scripts/unraid/vm-manager.py start -python3 scripts/unraid/vm-manager.py stop - -# Get VM IP -python3 scripts/unraid/vm-manager.py ip -\`\`\` - -### Service Management on VM -\`\`\`bash -# Check service status -ssh $VM_NAME "./scripts/vm-deploy.sh status" - -# Restart service -ssh $VM_NAME "./scripts/vm-deploy.sh restart" - -# View logs -ssh $VM_NAME "journalctl -u thrillwiki -f" -\`\`\` - -## Troubleshooting - -### Common Issues -1. **VM not accessible**: Check VM is running and has IP -2. **Service not starting**: Check logs with \`journalctl -u thrillwiki\` -3. **Webhook not working**: Verify port $WEBHOOK_PORT is open - -### Support Files -- Configuration: \`***REMOVED***.unraid\`, \`***REMOVED***.webhook\` -- Logs: \`logs/\` directory -- Documentation: \`docs/VM_DEPLOYMENT_SETUP.md\` - -**Your automated CI/CD system is now ready!** 🚀 - -Every push to the main branch will automatically deploy to your VM. -EOF - - log_success "Setup instructions saved to UNRAID_SETUP_COMPLETE.md" -} - -# Main automation function -main() { - log "🚀 Starting ThrillWiki Complete Unraid Automation" - echo "[AWS-SECRET-REMOVED]==========" - echo - - # Parse command line arguments - while [[ $# -gt 0 ]]; do - case $1 in - --reset) - RESET_ALL=true - shift - ;; - --reset-vm) - RESET_VM_ONLY=true - shift - ;; - --reset-config) - RESET_CONFIG_ONLY=true - shift - ;; - --help|-h) - show_help - exit 0 - ;; - *) - echo "Unknown option: $1" - show_help - exit 1 - ;; - esac - done - - # Create logs directory - mkdir -p "$LOG_DIR" - - # Handle reset modes - if [[ "$RESET_ALL" == "true" ]]; then - log "🔄 Complete reset mode - deleting VM and configuration" - echo - - # Load configuration first to get connection details for VM deletion - if [[ -f "$CONFIG_FILE" ]]; then - source "$CONFIG_FILE" - log_success "Loaded existing configuration for VM deletion" - else - log_warning "No configuration file found, will skip VM deletion" - fi - - # Delete existing VM if config exists - if [[ -f "$CONFIG_FILE" ]]; then - log "🗑️ Deleting existing VM..." - # Export environment variables for VM manager - set -a - source "$PROJECT_DIR/***REMOVED***.unraid" 2>/dev/null || true - set +a - - if python3 "$SCRIPT_DIR/vm-manager.py" delete; then - log_success "VM deleted successfully" - else - log "⚠️ VM deletion failed or VM didn't exist" - fi - fi - - # Remove configuration files - if [[ -f "$CONFIG_FILE" ]]; then - rm "$CONFIG_FILE" - log_success "Configuration file removed" - fi - - # Remove environment files - rm -f "$PROJECT_DIR/***REMOVED***.unraid" "$PROJECT_DIR/***REMOVED***.webhook" - log_success "Environment files removed" - - log_success "Complete reset finished - continuing with fresh setup" - echo - - elif [[ "$RESET_VM_ONLY" == "true" ]]; then - log "🔄 VM-only reset mode - deleting VM, preserving configuration" - echo - - # Load configuration to get connection details - if [[ -f "$CONFIG_FILE" ]]; then - source "$CONFIG_FILE" - log_success "Loaded existing configuration" - else - log_error "No configuration file found. Cannot reset VM without connection details." - echo " Run the script without reset flags first to create initial configuration." - exit 1 - fi - - # Delete existing VM - log "🗑️ Deleting existing VM..." - # Export environment variables for VM manager - set -a - source "$PROJECT_DIR/***REMOVED***.unraid" 2>/dev/null || true - set +a - - if python3 "$SCRIPT_DIR/vm-manager.py" delete; then - log_success "VM deleted successfully" - else - log "⚠️ VM deletion failed or VM didn't exist" - fi - - # Remove only environment files, keep main config - rm -f "$PROJECT_DIR/***REMOVED***.unraid" "$PROJECT_DIR/***REMOVED***.webhook" - log_success "Environment files removed, configuration preserved" - - log_success "VM reset complete - will recreate VM with saved configuration" - echo - - elif [[ "$RESET_CONFIG_ONLY" == "true" ]]; then - log "🔄 Config-only reset mode - deleting configuration, preserving VM" - echo - - # Remove configuration files - if [[ -f "$CONFIG_FILE" ]]; then - rm "$CONFIG_FILE" - log_success "Configuration file removed" - fi - - # Remove environment files - rm -f "$PROJECT_DIR/***REMOVED***.unraid" "$PROJECT_DIR/***REMOVED***.webhook" - log_success "Environment files removed" - - log_success "Configuration reset complete - will prompt for fresh configuration" - echo - fi - - # Collect configuration - prompt_unraid_config - - # Setup steps - setup_ssh_keys - setup_unraid_access - create_environment_files - install_dependencies - create_vm - wait_for_vm - configure_vm - start_services - setup_webhook_listener - test_deployment - generate_instructions - - echo - log_success "🎉 Complete automation setup finished!" - echo - log "Your ThrillWiki VM is running at: http://$VM_IP:8000" - log "Start the webhook listener: ./start-webhook.sh" - log "See UNRAID_SETUP_COMPLETE.md for detailed instructions" - echo - log "The system will now automatically deploy when you push to GitHub!" -} - -# Run main function and log output -main "$@" 2>&1 | tee "$LOG_DIR/unraid-automation.log" \ No newline at end of file diff --git a/shared/scripts/unraid/setup-ssh-key.sh b/shared/scripts/unraid/setup-ssh-key.sh deleted file mode 100755 index 6534caf4..00000000 --- a/shared/scripts/unraid/setup-ssh-key.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash - -# ThrillWiki Template VM SSH Key Setup Helper -# This script generates the SSH key needed for template VM access - -set -e - -# Colors for output -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -echo -e "${BLUE}ThrillWiki Template VM SSH Key Setup${NC}" -echo "[AWS-SECRET-REMOVED]" -echo - -SSH_KEY_PATH="$HOME/.ssh/thrillwiki_vm" - -# Generate SSH key if it doesn't exist -if [ ! -f "$SSH_KEY_PATH" ]; then - echo -e "${YELLOW}Generating new SSH key for ThrillWiki template VM...${NC}" - ssh-keygen -t rsa -b 4096 -f "$SSH_KEY_PATH" -N "" -C "thrillwiki-template-vm-access" - echo -e "${GREEN}✅ SSH key generated: $SSH_KEY_PATH${NC}" - echo -else - echo -e "${GREEN}✅ SSH key already exists: $SSH_KEY_PATH${NC}" - echo -fi - -# Display the public key -echo -e "${YELLOW}📋 Your SSH Public Key:${NC}" -echo "Copy this ENTIRE line and add it to your template VM:" -echo -echo -e "${GREEN}$(cat "$SSH_KEY_PATH.pub")${NC}" -echo - -# Instructions -echo -e "${BLUE}📝 Template VM Setup Instructions:${NC}" -echo "1. SSH into your template VM (thrillwiki-template-ubuntu)" -echo "2. Switch to the thrillwiki user:" -echo " sudo su - thrillwiki" -echo "3. Create .ssh directory and set permissions:" -echo " mkdir -p ~/.ssh && chmod 700 ~/.ssh" -echo "4. Add the public key above to ***REMOVED***:" -echo " echo 'YOUR_PUBLIC_KEY_HERE' >> ~/.ssh/***REMOVED***" -echo " chmod 600 ~/.ssh/***REMOVED***" -echo "5. Test SSH access:" -echo " ssh -i ~/.ssh/thrillwiki_vm thrillwiki@YOUR_TEMPLATE_VM_IP" -echo - -# SSH config helper -SSH_CONFIG="$HOME/.ssh/config" -echo -e "${BLUE}🔧 SSH Config Setup:${NC}" -if ! grep -q "thrillwiki-vm" "$SSH_CONFIG" 2>/dev/null; then - echo "Adding SSH config entry..." - cat >> "$SSH_CONFIG" << EOF - -# ThrillWiki Template VM -Host thrillwiki-vm - HostName %h - User thrillwiki - IdentityFile $SSH_KEY_PATH - StrictHostKeyChecking no - UserKnownHostsFile /dev/null -EOF - echo -e "${GREEN}✅ SSH config updated${NC}" -else - echo -e "${GREEN}✅ SSH config already contains thrillwiki-vm entry${NC}" -fi - -echo -echo -e "${GREEN}🎉 SSH key setup complete!${NC}" -echo "Next: Set up your template VM using TEMPLATE_VM_SETUP.md" -echo "Then run: ./setup-template-automation.sh" diff --git a/shared/scripts/unraid/setup-template-automation.sh b/shared/scripts/unraid/setup-template-automation.sh deleted file mode 100755 index df776b7e..00000000 --- a/shared/scripts/unraid/setup-template-automation.sh +++ /dev/null @@ -1,2262 +0,0 @@ -#!/bin/bash - -# ThrillWiki Template-Based Complete Unraid Automation Setup -# This script automates the entire template-based VM creation and deployment process on Unraid -# -# Usage: -# ./setup-template-automation.sh # Standard template-based setup -# ./setup-template-automation.sh --reset # Delete VM and config, start completely fresh -# ./setup-template-automation.sh --reset-vm # Delete VM only, keep configuration -# ./setup-template-automation.sh --reset-config # Delete config only, keep VM - -# Function to show help -show_help() { - echo "ThrillWiki Template-Based CI/CD Automation Setup" - echo "" - echo "This script sets up FAST template-based VM deployment using pre-configured Ubuntu templates." - echo "Template VMs deploy in 2-5 minutes instead of 20-30 minutes with autoinstall." - echo "" - echo "Usage:" - echo " $0 Set up or update ThrillWiki template automation" - echo " $0 -y Non-interactive mode, use saved configuration" - echo " $0 --reset Delete VM and config, start completely fresh" - echo " $0 --reset-vm Delete VM only, keep configuration" - echo " $0 --reset-config Delete config only, keep VM" - echo " $0 --help Show this help message" - echo "" - echo "Template Benefits:" - echo " ⚡ Speed: 2-5 min deployment vs 20-30 min with autoinstall" - echo " 🔒 Reliability: Pre-tested template eliminates installation failures" - echo " 💾 Efficiency: Copy-on-write disk format saves space" - echo "" - echo "Options:" - echo " -y, --yes Non-interactive mode - use saved configuration" - echo " and passwords without prompting. Requires existing" - echo " configuration file with saved settings." - echo "" - echo "Reset Options:" - echo " --reset Completely removes existing VM, disks, and config" - echo " before starting fresh template-based installation" - echo " --reset-vm Removes only the VM and disks, preserves saved" - echo " configuration to avoid re-entering settings" - echo " --reset-config Removes only the saved configuration, preserves" - echo " VM and prompts for fresh configuration input" - echo " --help Display this help and exit" - echo "" - echo "Examples:" - echo " $0 # Normal template-based setup/update" - echo " $0 -y # Non-interactive setup with saved config" - echo " $0 --reset # Complete fresh template installation" - echo " $0 --reset-vm # Fresh VM with saved settings" - echo " $0 --reset-config # Re-configure existing VM" - exit 0 -} - -# Check for help flag -if [[ "$1" == "--help" || "$1" == "-h" ]]; then - show_help -fi - -# Parse command line flags -RESET_ALL=false -RESET_VM_ONLY=false -RESET_CONFIG_ONLY=false -NON_INTERACTIVE=false - -# Process all arguments -while [[ $# -gt 0 ]]; do - case $1 in - -y|--yes) - NON_INTERACTIVE=true - echo "🤖 NON-INTERACTIVE MODE: Using saved configuration only" - shift - ;; - --reset) - RESET_ALL=true - echo "🔄 COMPLETE RESET MODE: Will delete VM and configuration" - shift - ;; - --reset-vm) - RESET_VM_ONLY=true - echo "🔄 VM RESET MODE: Will delete VM only, keep configuration" - shift - ;; - --reset-config) - RESET_CONFIG_ONLY=true - echo "🔄 CONFIG RESET MODE: Will delete configuration only, keep VM" - shift - ;; - --help|-h) - show_help - ;; - *) - echo "Unknown option: $1" - show_help - ;; - esac -done - -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -CYAN='\033[0;36m' -NC='\033[0m' # No Color - -log() { - echo -e "${BLUE}[TEMPLATE-AUTOMATION]${NC} $1" -} - -log_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -log_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -log_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -log_template() { - echo -e "${CYAN}[TEMPLATE]${NC} $1" -} - -# Configuration -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" -LOG_DIR="$PROJECT_DIR/logs" - -# Default values -DEFAULT_UNRAID_HOST="" -DEFAULT_VM_NAME="thrillwiki-vm" -DEFAULT_VM_MEMORY="4096" -DEFAULT_VM_VCPUS="2" -DEFAULT_VM_DISK_SIZE="50" -DEFAULT_WEBHOOK_PORT="9000" -TEMPLATE_VM_NAME="thrillwiki-template-ubuntu" - -# Configuration files -CONFIG_FILE="$PROJECT_DIR/.thrillwiki-template-config" -TOKEN_FILE="$PROJECT_DIR/.thrillwiki-github-token" - -# Function to save configuration -save_config() { - log "Saving template configuration to $CONFIG_FILE..." - cat > "$CONFIG_FILE" << EOF -# ThrillWiki Template-Based Automation Configuration -# This file stores your settings to avoid re-entering them each time - -# Unraid Server Configuration -UNRAID_HOST="$UNRAID_HOST" -UNRAID_USER="$UNRAID_USER" -VM_NAME="$VM_NAME" -VM_MEMORY="$VM_MEMORY" -VM_VCPUS="$VM_VCPUS" -VM_DISK_SIZE="$VM_DISK_SIZE" - -# Template Configuration -TEMPLATE_VM_NAME="$TEMPLATE_VM_NAME" -DEPLOYMENT_TYPE="template-based" - -# Network Configuration -VM_IP="$VM_IP" -VM_GATEWAY="$VM_GATEWAY" -VM_NETMASK="$VM_NETMASK" -VM_NETWORK="$VM_NETWORK" - -# GitHub Configuration -REPO_URL="$REPO_URL" -GITHUB_USERNAME="$GITHUB_USERNAME" -GITHUB_API_ENABLED="$GITHUB_API_ENABLED" -GITHUB_AUTH_METHOD="$GITHUB_AUTH_METHOD" - -# Webhook Configuration -WEBHOOK_PORT="$WEBHOOK_PORT" -WEBHOOK_ENABLED="$WEBHOOK_ENABLED" - -# SSH Configuration (path to key, not the key content) -SSH_KEY_PATH="$HOME/.ssh/thrillwiki_vm" -EOF - - log_success "Template configuration saved to $CONFIG_FILE" -} - -# Function to save GitHub token securely - OVERWRITE THE OLD ONE COMPLETELY -save_github_token() { - if [ -n "$GITHUB_TOKEN" ]; then - log "🔒 OVERWRITING GitHub token (new token will REPLACE old one)..." - - # Force remove any existing token file first - rm -f "$TOKEN_FILE" 2>/dev/null || true - - # Write new token - this COMPLETELY OVERWRITES any old token - echo "$GITHUB_TOKEN" > "$TOKEN_FILE" - chmod 600 "$TOKEN_FILE" # Restrict to owner read/write only - - log_success "✅ NEW GitHub token saved securely (OLD TOKEN COMPLETELY REPLACED)" - log "Token file: $TOKEN_FILE" - else - log_error "No GITHUB_TOKEN to save!" - fi -} - -# Function to load GitHub token -load_github_token() { - if [ -f "$TOKEN_FILE" ]; then - GITHUB_TOKEN=$(cat "$TOKEN_FILE") - if [ -n "$GITHUB_TOKEN" ]; then - log "🔓 Loaded saved GitHub token for reuse" - return 0 - fi - fi - return 1 -} - -# Function to load configuration -load_config() { - if [ -f "$CONFIG_FILE" ]; then - log "Loading existing template configuration from $CONFIG_FILE..." - source "$CONFIG_FILE" - return 0 - else - return 1 - fi -} - -# Function for non-interactive configuration loading -load_non_interactive_config() { - log "=== Non-Interactive Template Configuration Loading ===" - - # Load saved configuration - if ! load_config; then - log_error "No saved template configuration found. Cannot run in non-interactive mode." - log_error "Please run the script without -y flag first to create initial configuration." - exit 1 - fi - - log_success "Loaded saved template configuration successfully" - - # Check for required environment variables for passwords - if [ -z "${UNRAID_PASSWORD:-}" ]; then - log_error "UNRAID_PASSWORD environment variable not set." - log_error "For non-interactive mode, set: export UNRAID_PASSWORD='your_password'" - exit 1 - fi - - # Handle GitHub authentication based on saved method - if [ -n "$GITHUB_USERNAME" ] && [ "$GITHUB_API_ENABLED" = "true" ]; then - # Personal access token method - try authentication script first - log "Attempting to get PAT token from authentication script..." - if GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token 2>/dev/null) && [ -n "$GITHUB_TOKEN" ]; then - log_success "Token obtained from authentication script" - elif [ -n "${GITHUB_TOKEN:-}" ]; then - log "Using token from environment variable" - else - log_error "No GitHub PAT token available. Either:" - log_error "1. Run setup interactively to configure token" - log_error "2. Set GITHUB_TOKEN environment variable: export GITHUB_TOKEN='your_token'" - exit 1 - fi - fi - - # Handle webhook secret - if [ "$WEBHOOK_ENABLED" = "true" ]; then - if [ -z "${WEBHOOK_SECRET:-}" ]; then - log_error "WEBHOOK_SECRET environment variable not set." - log_error "For non-interactive mode, set: export WEBHOOK_SECRET='your_secret'" - exit 1 - fi - fi - - log_success "All required credentials loaded from environment variables" - log "Template configuration summary:" - echo " Unraid Host: $UNRAID_HOST" - echo " VM Name: $VM_NAME" - echo " Template VM: $TEMPLATE_VM_NAME" - echo " VM IP: $VM_IP" - echo " Repository: $REPO_URL" - echo " GitHub Auth: $GITHUB_AUTH_METHOD" - echo " Webhook Enabled: $WEBHOOK_ENABLED" - echo " Deployment Type: template-based ⚡" -} -# Function to stop and clean up existing VM before reset -stop_existing_vm_for_reset() { - local vm_name="$1" - local unraid_host="$2" - local unraid_user="$3" - - if [ -z "$vm_name" ] || [ -z "$unraid_host" ] || [ -z "$unraid_user" ]; then - log_warning "Missing VM connection details for VM shutdown" - log "VM Name: ${vm_name:-'not set'}" - log "Unraid Host: ${unraid_host:-'not set'}" - log "Unraid User: ${unraid_user:-'not set'}" - return 0 - fi - - log "🔍 Checking if VM '$vm_name' exists and needs to be stopped..." - - # Test connection first - if ! ssh -o ConnectTimeout=10 "$unraid_user@$unraid_host" "echo 'Connected'" > /dev/null 2>&1; then - log_warning "Cannot connect to Unraid server at $unraid_host - skipping VM shutdown" - return 0 - fi - - # Check VM status - local vm_status=$(ssh "$unraid_user@$unraid_host" "virsh domstate $vm_name 2>/dev/null || echo 'not defined'") - - if [ "$vm_status" = "not defined" ]; then - log "VM '$vm_name' does not exist - no need to stop" - return 0 - elif [ "$vm_status" = "shut off" ]; then - log "VM '$vm_name' is already stopped - good for reset" - return 0 - elif [ "$vm_status" = "running" ]; then - log_warning "⚠️ VM '$vm_name' is currently RUNNING!" - log_warning "VM must be stopped before reset to avoid conflicts." - echo - - if [ "$NON_INTERACTIVE" = "true" ]; then - log "Non-interactive mode: Automatically stopping VM..." - stop_choice="y" - else - echo "Options:" - echo " 1. Stop the VM gracefully before reset (recommended)" - echo " 2. Force stop the VM before reset" - echo " 3. Skip VM shutdown (may cause issues)" - echo " 4. Cancel reset" - echo - read -p "What would you like to do? (1-4): " stop_choice - fi - - case $stop_choice in - 1|y|Y) - log "Stopping VM '$vm_name' gracefully before reset..." - - # Try graceful shutdown first - log "Attempting graceful shutdown..." - if ssh "$unraid_user@$unraid_host" "virsh shutdown $vm_name"; then - log "Shutdown command sent, waiting for VM to stop..." - - # Wait up to 60 seconds for graceful shutdown - local wait_count=0 - local max_wait=12 # 60 seconds (12 * 5 seconds) - - while [ $wait_count -lt $max_wait ]; do - sleep 5 - local current_status=$(ssh "$unraid_user@$unraid_host" "virsh domstate $vm_name 2>/dev/null || echo 'not defined'") - - if [ "$current_status" != "running" ]; then - log_success "✅ VM '$vm_name' stopped gracefully (status: $current_status)" - return 0 - fi - - ((wait_count++)) - log "Waiting for graceful shutdown... ($((wait_count * 5))s)" - done - - # If graceful shutdown didn't work, ask about force stop - log_warning "Graceful shutdown took too long. VM is still running." - - if [ "$NON_INTERACTIVE" = "true" ]; then - log "Non-interactive mode: Force stopping VM..." - force_choice="y" - else - echo - read -p "Force stop the VM? (y/n): " force_choice - fi - - if [ "$force_choice" = "y" ] || [ "$force_choice" = "Y" ]; then - log "Force stopping VM '$vm_name'..." - if ssh "$unraid_user@$unraid_host" "virsh destroy $vm_name"; then - log_success "✅ VM '$vm_name' force stopped" - return 0 - else - log_error "❌ Failed to force stop VM" - return 1 - fi - else - log_error "VM is still running. Cannot proceed safely with reset." - return 1 - fi - else - log_error "❌ Failed to send shutdown command to VM" - return 1 - fi - ;; - 2) - log "Force stopping VM '$vm_name' before reset..." - if ssh "$unraid_user@$unraid_host" "virsh destroy $vm_name"; then - log_success "✅ VM '$vm_name' force stopped" - return 0 - else - log_error "❌ Failed to force stop VM" - return 1 - fi - ;; - 3) - log_warning "⚠️ Continuing with running VM (NOT RECOMMENDED)" - log_warning "This may cause conflicts during VM recreation!" - return 0 - ;; - 4|n|N|"") - log "VM reset cancelled by user" - exit 0 - ;; - *) - log_error "Invalid choice. Please select 1, 2, 3, or 4." - return 1 - ;; - esac - else - log "VM '$vm_name' status: $vm_status - continuing with reset" - return 0 - fi -} - -# Function to gracefully stop template VM if running -stop_template_vm_if_running() { - local template_status=$(ssh "$UNRAID_USER@$UNRAID_HOST" "virsh domstate $TEMPLATE_VM_NAME 2>/dev/null || echo 'not defined'") - - if [ "$template_status" = "running" ]; then - log_warning "⚠️ Template VM '$TEMPLATE_VM_NAME' is currently RUNNING!" - log_warning "Template VMs must be stopped to create new instances safely." - echo - - if [ "$NON_INTERACTIVE" = "true" ]; then - log "Non-interactive mode: Automatically stopping template VM..." - stop_choice="y" - else - echo "Options:" - echo " 1. Stop the template VM gracefully (recommended)" - echo " 2. Continue anyway (may cause issues)" - echo " 3. Cancel setup" - echo - read -p "What would you like to do? (1/2/3): " stop_choice - fi - - case $stop_choice in - 1|y|Y) - log "Stopping template VM gracefully..." - - # Try graceful shutdown first - log "Attempting graceful shutdown..." - if ssh "$UNRAID_USER@$UNRAID_HOST" "virsh shutdown $TEMPLATE_VM_NAME"; then - log "Shutdown command sent, waiting for VM to stop..." - - # Wait up to 60 seconds for graceful shutdown - local wait_count=0 - local max_wait=12 # 60 seconds (12 * 5 seconds) - - while [ $wait_count -lt $max_wait ]; do - sleep 5 - local current_status=$(ssh "$UNRAID_USER@$UNRAID_HOST" "virsh domstate $TEMPLATE_VM_NAME 2>/dev/null || echo 'not defined'") - - if [ "$current_status" != "running" ]; then - log_success "✅ Template VM stopped gracefully (status: $current_status)" - return 0 - fi - - ((wait_count++)) - log "Waiting for graceful shutdown... ($((wait_count * 5))s)" - done - - # If graceful shutdown didn't work, ask about force stop - log_warning "Graceful shutdown took too long. Template VM is still running." - - if [ "$NON_INTERACTIVE" = "true" ]; then - log "Non-interactive mode: Force stopping template VM..." - force_choice="y" - else - echo - read -p "Force stop the template VM? (y/n): " force_choice - fi - - if [ "$force_choice" = "y" ] || [ "$force_choice" = "Y" ]; then - log "Force stopping template VM..." - if ssh "$UNRAID_USER@$UNRAID_HOST" "virsh destroy $TEMPLATE_VM_NAME"; then - log_success "✅ Template VM force stopped" - return 0 - else - log_error "❌ Failed to force stop template VM" - return 1 - fi - else - log_error "Template VM is still running. Cannot proceed safely." - return 1 - fi - else - log_error "❌ Failed to send shutdown command to template VM" - return 1 - fi - ;; - 2) - log_warning "⚠️ Continuing with running template VM (NOT RECOMMENDED)" - log_warning "This may cause disk corruption or deployment issues!" - return 0 - ;; - 3|n|N|"") - log "Setup cancelled by user" - exit 0 - ;; - *) - log_error "Invalid choice. Please select 1, 2, or 3." - return 1 - ;; - esac - fi - - return 0 -} - -# Function to check template VM availability -check_template_vm() { - log_template "Checking template VM availability..." - - # Test connection first - if ! ssh -o ConnectTimeout=10 "$UNRAID_USER@$UNRAID_HOST" "echo 'Connected'" > /dev/null 2>&1; then - log_error "Cannot connect to Unraid server at $UNRAID_HOST" - log_error "Please verify:" - log_error "1. Unraid server IP address is correct" - log_error "2. SSH key authentication is set up" - log_error "3. Network connectivity" - return 1 - fi - - # Check if template VM disk exists - if ssh "$UNRAID_USER@$UNRAID_HOST" "test -f /mnt/user/domains/$TEMPLATE_VM_NAME/vdisk1.qcow2"; then - log_template "✅ Template VM disk found: /mnt/user/domains/$TEMPLATE_VM_NAME/vdisk1.qcow2" - - # Get template info - template_info=$(ssh "$UNRAID_USER@$UNRAID_HOST" "qemu-img info /mnt/user/domains/$TEMPLATE_VM_NAME/vdisk1.qcow2 | grep 'virtual size' || echo 'Size info not available'") - log_template "📋 Template info: $template_info" - - # Check and handle template VM status - template_status=$(ssh "$UNRAID_USER@$UNRAID_HOST" "virsh domstate $TEMPLATE_VM_NAME 2>/dev/null || echo 'not defined'") - - if [ "$template_status" = "running" ]; then - log_template "Template VM status: $template_status (needs to be stopped)" - - # Stop the template VM if running - if ! stop_template_vm_if_running; then - log_error "Failed to stop template VM. Cannot proceed safely." - return 1 - fi - else - log_template "✅ Template VM status: $template_status (good for template use)" - fi - - return 0 - else - log_error "❌ Template VM disk not found!" - log_error "Expected location: /mnt/user/domains/$TEMPLATE_VM_NAME/vdisk1.qcow2" - log_error "" - log_error "To create the template VM:" - log_error "1. Create a VM named '$TEMPLATE_VM_NAME' on your Unraid server" - log_error "2. Install Ubuntu 24.04 LTS with required packages" - log_error "3. Configure it with Python, PostgreSQL, Nginx, etc." - log_error "4. Shut it down to use as a template" - log_error "" - log_error "See README-template-deployment.md for detailed setup instructions" - return 1 - fi -} - -# Function to prompt for configuration -prompt_template_config() { - # In non-interactive mode, use saved config only - if [ "$NON_INTERACTIVE" = "true" ]; then - load_non_interactive_config - return 0 - fi - - log "=== ThrillWiki Template-Based VM Configuration ===" - echo - log_template "🚀 This setup uses TEMPLATE-BASED deployment for ultra-fast VM creation!" - echo - - # Try to load existing config first - if load_config; then - log_success "Loaded existing template configuration" - echo "Current settings:" - echo " Unraid Host: $UNRAID_HOST" - echo " VM Name: $VM_NAME" - echo " Template VM: $TEMPLATE_VM_NAME" - echo " VM IP: $VM_IP" - echo " Repository: $REPO_URL" - echo " Deployment: template-based ⚡" - echo - read -p "Use existing configuration? (y/n): " use_existing - if [ "$use_existing" = "y" ] || [ "$use_existing" = "Y" ]; then - # Still need to get sensitive info that we don't save - read -s -p "Enter Unraid [PASSWORD-REMOVED] - echo - - # Handle GitHub authentication based on saved method - if [ -n "$GITHUB_USERNAME" ] && [ "$GITHUB_API_ENABLED" = "true" ]; then - # Try different sources for the token in order of preference - log "Loading GitHub PAT token..." - - # 1. Try authentication script first - if GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token 2>/dev/null) && [ -n "$GITHUB_TOKEN" ]; then - log_success "Token obtained from authentication script" - log "Using existing PAT token from authentication script" - - # Validate token and repository access immediately - log "🔍 Validating GitHub token and repository access..." - if ! validate_github_access; then - log_error "❌ GitHub token validation failed. Please check your token and repository access." - log "Please try entering a new token or check your repository URL." - return 1 - fi - - # 2. Try saved token file - elif load_github_token; then - log_success "Token loaded from secure storage (reusing for VM reset)" - - # Validate token and repository access immediately - log "🔍 Validating GitHub token and repository access..." - if ! validate_github_access; then - log_error "❌ GitHub token validation failed. Please check your token and repository access." - log "Please try entering a new token or check your repository URL." - return 1 - fi - - else - log "No token found in authentication script or saved storage" - read -s -p "Enter GitHub personal access token: " GITHUB_TOKEN - echo - - # Validate the new token immediately - if [ -n "$GITHUB_TOKEN" ]; then - log "🔍 Validating new GitHub token..." - if ! validate_github_access; then - log_error "❌ GitHub token validation failed. Please check your token and repository access." - log "Please try running the setup again with a valid token." - return 1 - fi - fi - - # Save the new token for future VM resets - save_github_token - fi - fi - - if [ "$WEBHOOK_ENABLED" = "true" ]; then - read -s -p "Enter GitHub webhook secret: " WEBHOOK_SECRET - echo - fi - - # Check template VM before proceeding - if ! check_template_vm; then - log_error "Template VM check failed. Please set up your template VM first." - exit 1 - fi - - return 0 - fi - fi - - # Prompt for new configuration - read -p "Enter your Unraid server IP address: " UNRAID_HOST - - read -p "Enter Unraid username (default: root): " UNRAID_USER - UNRAID_USER=${UNRAID_USER:-root} - - read -s -p "Enter Unraid [PASSWORD-REMOVED] - echo - # Note: Password not saved for security - - # Check template VM availability early - log_template "Verifying template VM setup..." - if ! check_template_vm; then - log_error "Template VM setup is required before proceeding." - echo - read -p "Do you want to continue setup anyway? (y/n): " continue_anyway - if [ "$continue_anyway" != "y" ] && [ "$continue_anyway" != "Y" ]; then - log "Setup cancelled. Please set up your template VM first." - log "See README-template-deployment.md for instructions." - exit 1 - fi - log_warning "Continuing setup without verified template VM..." - else - log_success "Template VM verified and ready!" - fi - - read -p "Enter VM name (default: $DEFAULT_VM_NAME): " VM_NAME - VM_NAME=${VM_NAME:-$DEFAULT_VM_NAME} - - read -p "Enter VM memory in MB (default: $DEFAULT_VM_MEMORY): " VM_MEMORY - VM_MEMORY=${VM_MEMORY:-$DEFAULT_VM_MEMORY} - - read -p "Enter VM vCPUs (default: $DEFAULT_VM_VCPUS): " VM_VCPUS - VM_VCPUS=${VM_VCPUS:-$DEFAULT_VM_VCPUS} - - read -p "Enter VM disk size in GB (default: $DEFAULT_VM_DISK_SIZE): " VM_DISK_SIZE - VM_DISK_SIZE=${VM_DISK_SIZE:-$DEFAULT_VM_DISK_SIZE} - - # Template VM name (usually fixed) - read -p "Enter template VM name (default: $TEMPLATE_VM_NAME): " TEMPLATE_VM_NAME_INPUT - TEMPLATE_VM_NAME=${TEMPLATE_VM_NAME_INPUT:-$TEMPLATE_VM_NAME} - - read -p "Enter GitHub repository URL: " REPO_URL - - # GitHub API Configuration - PAT Only - echo - log "=== GitHub Personal Access Token Configuration ===" - echo "This setup requires a GitHub Personal Access Token (PAT) for repository access." - echo "Both classic tokens and fine-grained tokens are supported." - echo "" - echo "Required token permissions:" - echo " - Repository access (read/write)" - echo " - Contents (read/write)" - echo " - Metadata (read)" - echo "" - - # Try to get token from authentication script first - log "Checking for existing GitHub token..." - if GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token 2>/dev/null) && [ -n "$GITHUB_TOKEN" ]; then - # Get username from authentication script if possible - if GITHUB_USERNAME=$(python3 "$SCRIPT_DIR/../github-auth.py" whoami 2>/dev/null | grep "You are authenticated as:" | cut -d: -f2 | xargs) && [ -n "$GITHUB_USERNAME" ]; then - log_success "Found existing token and username from authentication script" - echo "Username: $GITHUB_USERNAME" - echo "Token: ${GITHUB_TOKEN:0:8}... (masked)" - echo - read -p "Use this existing token? (y/n): " use_existing_token - - if [ "$use_existing_token" != "y" ] && [ "$use_existing_token" != "Y" ]; then - GITHUB_TOKEN="" - GITHUB_USERNAME="" - fi - else - log "Found token but no username, need to get username..." - read -p "Enter GitHub username: " GITHUB_USERNAME - fi - fi - - # If no token found or user chose not to use existing, prompt for manual entry - if [ -z "$GITHUB_TOKEN" ]; then - log "Enter your GitHub credentials manually:" - read -p "Enter GitHub username: " GITHUB_USERNAME - read -s -p "Enter GitHub Personal Access Token (classic or fine-grained): " GITHUB_TOKEN - echo - fi - - # Validate that we have both username and token - if [ -n "$GITHUB_USERNAME" ] && [ -n "$GITHUB_TOKEN" ]; then - GITHUB_API_ENABLED=true - GITHUB_AUTH_METHOD="token" - log_success "Personal access token configured for user: $GITHUB_USERNAME" - - # Test the token quickly - log "Testing GitHub token access..." - if curl -sf -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user >/dev/null 2>&1; then - log_success "✅ GitHub token validated successfully" - else - log_warning "⚠️ Could not validate GitHub token (API may be rate-limited)" - log "Proceeding anyway - token will be tested during repository operations" - fi - else - log_error "Both username and token are required for GitHub access" - log_error "Repository cloning and auto-pull functionality will not work without proper authentication" - exit 1 - fi - - # Webhook Configuration - echo - read -s -p "Enter GitHub webhook secret (optional, press Enter to skip): " WEBHOOK_SECRET - echo - - # If no webhook secret provided, disable webhook functionality - if [ -z "$WEBHOOK_SECRET" ]; then - log "No webhook secret provided - webhook functionality will be disabled" - WEBHOOK_ENABLED=false - else - WEBHOOK_ENABLED=true - fi - - read -p "Enter webhook port (default: $DEFAULT_WEBHOOK_PORT): " WEBHOOK_PORT - WEBHOOK_PORT=${WEBHOOK_PORT:-$DEFAULT_WEBHOOK_PORT} - - # Get VM network configuration preference - echo - log "=== Network Configuration ===" - echo "Choose network configuration method:" - echo "1. DHCP (automatic IP assignment - recommended)" - echo "2. Static IP (manual IP configuration)" - - while true; do - read -p "Select option (1-2): " network_choice - case $network_choice in - 1) - log "Using DHCP network configuration..." - VM_IP="dhcp" - VM_GATEWAY="192.168.20.1" - VM_NETMASK="255.255.255.0" - VM_NETWORK="192.168.20.0/24" - NETWORK_MODE="dhcp" - break - ;; - 2) - log "Using static IP network configuration..." - # Get VM IP address with proper range validation - while true; do - read -p "Enter VM IP address (192.168.20.10-192.168.20.100): " VM_IP - if [[ "$VM_IP" =~ ^192\.168\.20\.([1-9][0-9]|100)$ ]]; then - local ip_last_octet="${BASH_REMATCH[1]}" - if [ "$ip_last_octet" -ge 10 ] && [ "$ip_last_octet" -le 100 ]; then - break - fi - fi - echo "Invalid IP address. Please enter an IP in the range 192.168.20.10-192.168.20.100" - done - VM_GATEWAY="192.168.20.1" - VM_NETMASK="255.255.255.0" - VM_NETWORK="192.168.20.0/24" - NETWORK_MODE="static" - break - ;; - *) - echo "Invalid option. Please select 1 or 2." - ;; - esac - done - - # Save configuration and GitHub token - save_config - save_github_token # Save token for VM resets - log_success "Template configuration saved - setup complete!" -} - -# Function to update SSH config with actual VM IP address -update_ssh_config_with_ip() { - local vm_name="$1" - local vm_ip="$2" - local ssh_config_path="$HOME/.ssh/config" - - log "Updating SSH config with actual IP: $vm_ip" - - # Check if SSH config exists and has our VM entry - if [ -f "$ssh_config_path" ] && grep -q "Host $vm_name" "$ssh_config_path"; then - # Update the HostName to use actual IP instead of %h placeholder - if grep -A 10 "Host $vm_name" "$ssh_config_path" | grep -q "HostName %h"; then - # Replace %h with actual IP - sed -i.bak "/Host $vm_name/,/^Host\|^$/s/HostName %h/HostName $vm_ip/" "$ssh_config_path" - log_success "SSH config updated: $vm_name now points to $vm_ip" - elif grep -A 10 "Host $vm_name" "$ssh_config_path" | grep -q "HostName "; then - # Update existing IP - sed -i.bak "/Host $vm_name/,/^Host\|^$/s/HostName .*/HostName $vm_ip/" "$ssh_config_path" - log_success "SSH config updated: $vm_name IP changed to $vm_ip" - else - # Add HostName line after Host line - sed -i.bak "/Host $vm_name/a\\ - HostName $vm_ip" "$ssh_config_path" - log_success "SSH config updated: Added IP $vm_ip for $vm_name" - fi - - # Show the updated config section - log "Updated SSH config for $vm_name:" - grep -A 6 "Host $vm_name" "$ssh_config_path" | head -7 - else - log_warning "SSH config entry for $vm_name not found, cannot update IP" - fi -} - -# Generate SSH keys for VM access -setup_ssh_keys() { - log "Setting up SSH keys for template VM access..." - - local ssh_key_path="$HOME/.ssh/thrillwiki_vm" - local ssh_config_path="$HOME/.ssh/config" - - if [ ! -f "$ssh_key_path" ]; then - ssh-keygen -t rsa -b 4096 -f "$ssh_key_path" -N "" -C "thrillwiki-template-vm-access" - log_success "SSH key generated: $ssh_key_path" - else - log "SSH key already exists: $ssh_key_path" - fi - - # Add SSH config entry - if ! grep -q "Host $VM_NAME" "$ssh_config_path" 2>/dev/null; then - cat >> "$ssh_config_path" << EOF - -# ThrillWiki Template VM -Host $VM_NAME - HostName %h - User thrillwiki - IdentityFile $ssh_key_path - StrictHostKeyChecking no - UserKnownHostsFile /dev/null -EOF - log_success "SSH config updated for template VM" - fi - - # Store public key for VM setup - SSH_PUBLIC_KEY=$(cat "$ssh_key_path.pub") - export SSH_PUBLIC_KEY -} - -# Setup Unraid host access -setup_unraid_access() { - log "Setting up Unraid server access..." - - local unraid_key_path="$HOME/.ssh/unraid_access" - - if [ ! -f "$unraid_key_path" ]; then - ssh-keygen -t rsa -b 4096 -f "$unraid_key_path" -N "" -C "unraid-template-access" - - log "Please add this public key to your Unraid server:" - echo "---" - cat "$unraid_key_path.pub" - echo "---" - echo - log "Add this to /root/.ssh/***REMOVED*** on your Unraid server" - read -p "Press Enter when you've added the key..." - fi - - # Test Unraid connection - log "Testing Unraid connection..." - if ssh -i "$unraid_key_path" -o ConnectTimeout=5 -o StrictHostKeyChecking=no "$UNRAID_USER@$UNRAID_HOST" "echo 'Connected to Unraid successfully'"; then - log_success "Unraid connection test passed" - else - log_error "Unraid connection test failed" - exit 1 - fi - - # Update SSH config for Unraid - if ! grep -q "Host unraid" "$HOME/.ssh/config" 2>/dev/null; then - cat >> "$HOME/.ssh/config" << EOF - -# Unraid Server -Host unraid - HostName $UNRAID_HOST - User $UNRAID_USER - IdentityFile $unraid_key_path - StrictHostKeyChecking no -EOF - fi -} - -# Create environment files for template deployment -create_environment_files() { - log "Creating template deployment environment files..." - log "🔄 NEW TOKEN WILL BE WRITTEN TO ALL ENVIRONMENT FILES (overwriting any old tokens)" - - # Force remove old environment files first - rm -f "$PROJECT_DIR/***REMOVED***.unraid" "$PROJECT_DIR/***REMOVED***.webhook" 2>/dev/null || true - - # Get SSH public key content safely - local ssh_key_path="$HOME/.ssh/thrillwiki_vm.pub" - local ssh_public_key="" - if [ -f "$ssh_key_path" ]; then - ssh_public_key=$(cat "$ssh_key_path") - fi - - # Template-based Unraid VM environment - COMPLETELY NEW FILE WITH NEW TOKEN - cat > "$PROJECT_DIR/***REMOVED***.unraid" << EOF -# ThrillWiki Template-Based VM Configuration -UNRAID_HOST=$UNRAID_HOST -UNRAID_USER=$UNRAID_USER -UNRAID_PASSWORD=$UNRAID_PASSWORD -VM_NAME=$VM_NAME -VM_MEMORY=$VM_MEMORY -VM_VCPUS=$VM_VCPUS -VM_DISK_SIZE=$VM_DISK_SIZE -SSH_PUBLIC_KEY="$ssh_public_key" - -# Template Configuration -TEMPLATE_VM_NAME=$TEMPLATE_VM_NAME -DEPLOYMENT_TYPE=template-based - -# Network Configuration -VM_IP=$VM_IP -VM_GATEWAY=$VM_GATEWAY -VM_NETMASK=$VM_NETMASK -VM_NETWORK=$VM_NETWORK - -# GitHub Configuration -REPO_URL=$REPO_URL -GITHUB_USERNAME=$GITHUB_USERNAME -GITHUB_TOKEN=$GITHUB_TOKEN -GITHUB_API_ENABLED=$GITHUB_API_ENABLED -EOF - - # Webhook environment (updated with VM info) - cat > "$PROJECT_DIR/***REMOVED***.webhook" << EOF -# ThrillWiki Template-Based Webhook Configuration -WEBHOOK_PORT=$WEBHOOK_PORT -WEBHOOK_SECRET=$WEBHOOK_SECRET -WEBHOOK_ENABLED=$WEBHOOK_ENABLED -VM_HOST=$VM_IP -VM_PORT=22 -VM_USER=thrillwiki -VM_KEY_PATH=$HOME/.ssh/thrillwiki_vm -VM_PROJECT_PATH=/home/thrillwiki/thrillwiki -REPO_URL=$REPO_URL -DEPLOY_BRANCH=main - -# Template Configuration -TEMPLATE_VM_NAME=$TEMPLATE_VM_NAME -DEPLOYMENT_TYPE=template-based - -# GitHub API Configuration -GITHUB_USERNAME=$GITHUB_USERNAME -GITHUB_TOKEN=$GITHUB_TOKEN -GITHUB_API_ENABLED=$GITHUB_API_ENABLED -EOF - - log_success "Template deployment environment files created" -} - -# Install required tools -install_dependencies() { - log "Installing required dependencies for template deployment..." - - # Check for required tools - local missing_tools=() - local mac_tools=() - - command -v python3 >/dev/null 2>&1 || missing_tools+=("python3") - command -v ssh >/dev/null 2>&1 || missing_tools+=("openssh-client") - command -v scp >/dev/null 2>&1 || missing_tools+=("openssh-client") - - # Install missing tools based on platform - if [ ${#missing_tools[@]} -gt 0 ]; then - log "Installing missing tools: ${missing_tools[*]}" - - if command -v apt-get >/dev/null 2>&1; then - sudo apt-get update - sudo apt-get install -y "${missing_tools[@]}" - elif command -v yum >/dev/null 2>&1; then - sudo yum install -y "${missing_tools[@]}" - elif command -v dnf >/dev/null 2>&1; then - sudo dnf install -y "${missing_tools[@]}" - elif command -v brew >/dev/null 2>&1; then - # macOS with Homebrew - for tool in "${missing_tools[@]}"; do - case $tool in - python3) brew install python3 ;; - openssh-client) log "OpenSSH should be available on macOS" ;; - esac - done - else - log_error "Package manager not found. Please install: ${missing_tools[*]}" - exit 1 - fi - fi - - # Install Python dependencies - if [ -f "$PROJECT_DIR/pyproject.toml" ]; then - log "Installing Python dependencies with UV..." - if ! command -v uv >/dev/null 2>&1; then - curl -LsSf https://astral.sh/uv/install.sh | sh - source ~/.cargo/env - fi - cd "$PROJECT_DIR" - uv sync - fi - - log_success "Dependencies installed for template deployment" -} - -# Create VM using the template-based VM manager -create_template_vm() { - log "Creating VM from template on Unraid server..." - - # Export all environment variables from the file - set -a # automatically export all variables - source "$PROJECT_DIR/***REMOVED***.unraid" - set +a # turn off automatic export - - # Run template-based VM setup - cd "$PROJECT_DIR" - python3 scripts/unraid/main_template.py setup - - if [ $? -eq 0 ]; then - log_success "Template-based VM setup completed successfully ⚡" - log_template "VM deployed in minutes instead of 30+ minutes!" - else - log_error "Template-based VM setup failed" - exit 1 - fi -} - -# Wait for template VM to be ready and get IP -wait_for_template_vm() { - log "🔍 Getting VM IP address from guest agent..." - log_template "Template VMs should get IP immediately via guest agent!" - - # Export all environment variables from the file - set -a # automatically export all variables - source "$PROJECT_DIR/***REMOVED***.unraid" - set +a # turn off automatic export - - # Check for IP immediately - template VMs should have guest agent running - local max_attempts=12 # 3 minutes max wait (much shorter) - local attempt=1 - - log "🔍 Phase 1: Checking guest agent for IP address..." - - while [ $attempt -le $max_attempts ]; do - log "🔍 Attempt $attempt/$max_attempts: Querying guest agent on VM '$VM_NAME'..." - - # Add timeout to the IP detection to prevent hanging - VM_IP_RESULT="" - VM_IP="" - - # Use timeout command to prevent hanging (30 seconds max per attempt) - if command -v timeout >/dev/null 2>&1; then - VM_IP_RESULT=$(timeout 30 python3 scripts/unraid/main_template.py ip 2>&1 || echo "TIMEOUT") - elif command -v gtimeout >/dev/null 2>&1; then - # macOS with coreutils installed - VM_IP_RESULT=$(gtimeout 30 python3 scripts/unraid/main_template.py ip 2>&1 || echo "TIMEOUT") - else - # Fallback for systems without timeout command - use background process with kill - log "⚠️ No timeout command available, using background process method..." - VM_IP_RESULT=$(python3 scripts/unraid/main_template.py ip 2>&1 & - PID=$! - ( - sleep 30 - if kill -0 $PID 2>/dev/null; then - kill $PID 2>/dev/null - echo "TIMEOUT" - fi - ) & - wait $PID 2>/dev/null || echo "TIMEOUT") - fi - - # Check if we got a timeout - if echo "$VM_IP_RESULT" | grep -q "TIMEOUT"; then - log "⚠️ IP detection timed out after 30 seconds - guest agent may not be ready" - elif [ -n "$VM_IP_RESULT" ]; then - # Show what we got from the query - log "📝 Guest agent response: $(echo "$VM_IP_RESULT" | head -1)" - - # Extract IP from successful response - VM_IP=$(echo "$VM_IP_RESULT" | grep "VM IP:" | cut -d' ' -f3) - else - log "⚠️ No response from guest agent query" - fi - - if [ -n "$VM_IP" ] && [ "$VM_IP" != "None" ] && [ "$VM_IP" != "null" ] && [ "$VM_IP" != "TIMEOUT" ]; then - log_success "✅ Template VM got IP address: $VM_IP ⚡" - - # Update SSH config with actual IP - update_ssh_config_with_ip "$VM_NAME" "$VM_IP" - - # Update webhook environment with IP - sed -i.bak "s/VM_HOST=$VM_NAME/VM_HOST=$VM_IP/" "$PROJECT_DIR/***REMOVED***.webhook" - - break - fi - - # Much shorter wait time since template VMs should be fast - if [ $attempt -le 3 ]; then - log "⏳ No IP yet, waiting 5 seconds... (VM may still be booting)" - sleep 5 # Very short wait for first few attempts - else - log "⏳ Still waiting for IP... ($(($attempt * 15))s elapsed, checking every 15s)" - - # Show VM status to help debug - also with timeout - log "🔍 Checking VM status for debugging..." - if command -v timeout >/dev/null 2>&1; then - VM_STATUS=$(timeout 15 python3 scripts/unraid/main_template.py status 2>&1 | head -1 || echo "Status check timed out") - else - VM_STATUS=$(python3 scripts/unraid/main_template.py status 2>&1 | head -1) - fi - - if [ -n "$VM_STATUS" ]; then - log "📊 VM Status: $VM_STATUS" - fi - - sleep 15 - fi - ((attempt++)) - done - - if [ -z "$VM_IP" ] || [ "$VM_IP" = "None" ] || [ "$VM_IP" = "null" ]; then - log_error "❌ Template VM failed to get IP address after $((max_attempts * 15)) seconds" - log_error "Guest agent may not be running or network configuration issue" - log_error "Check VM console on Unraid: virsh console $VM_NAME" - exit 1 - fi - - # Phase 2: Wait for SSH connectivity (should be very fast for templates) - log "🔍 Phase 2: Testing SSH connectivity to $VM_IP..." - wait_for_ssh_connectivity "$VM_IP" -} - -# Wait for SSH connectivity to be available -wait_for_ssh_connectivity() { - local vm_ip="$1" - local max_ssh_attempts=20 # 5 minutes max wait for SSH - local ssh_attempt=1 - - while [ $ssh_attempt -le $max_ssh_attempts ]; do - log "🔑 Testing SSH connection to $vm_ip... (attempt $ssh_attempt/$max_ssh_attempts)" - - # Test SSH connectivity with a simple command - if ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o BatchMode=yes "$VM_NAME" "echo 'SSH connection successful'" >/dev/null 2>&1; then - log_success "✅ SSH connectivity established to template VM! 🚀" - return 0 - fi - - # More detailed error for first few attempts - if [ $ssh_attempt -le 3 ]; then - log "⏳ SSH not ready yet - VM may still be booting or initializing SSH service..." - else - log "⏳ Still waiting for SSH... ($(($ssh_attempt * 15))s elapsed)" - fi - - sleep 15 - ((ssh_attempt++)) - done - - log_error "❌ SSH connection failed after $((max_ssh_attempts * 15)) seconds" - log_error "VM IP: $vm_ip" - log_error "Try manually: ssh $VM_NAME" - log_error "Check VM console on Unraid for boot issues" - exit 1 -} -# Configure VM for ThrillWiki using template-optimized deployment -configure_template_vm() { - log "🚀 Deploying ThrillWiki to template VM..." - log "This will sync the project files and set up the application" - - # First, sync the current project files to the VM - deploy_project_files - - # Then run the setup script on the VM - run_vm_setup_script - - log_success "✅ Template VM configured and application deployed! ⚡" -} - -# Configure passwordless sudo for required operations -configure_passwordless_sudo() { - log "⚙️ Configuring passwordless sudo for deployment operations..." - - # Create sudoers configuration file for thrillwiki user - local sudoers_config="/tmp/thrillwiki-sudoers" - - cat > "$sudoers_config" << 'EOF' -# ThrillWiki deployment sudo configuration -# Allow thrillwiki user to run specific commands without password - -# File system operations for deployment -thrillwiki ALL=(ALL) NOPASSWD: /bin/rm, /bin/mkdir, /bin/chown, /bin/chmod - -# Package management for updates -thrillwiki ALL=(ALL) NOPASSWD: /usr/bin/apt, /usr/bin/apt-get, /usr/bin/apt-cache - -# System service management -thrillwiki ALL=(ALL) NOPASSWD: /bin/systemctl - -# PostgreSQL management -thrillwiki ALL=(ALL) NOPASSWD: /usr/bin/sudo -u postgres * - -# Service file management -thrillwiki ALL=(ALL) NOPASSWD: /bin/cp [AWS-SECRET-REMOVED]emd/* /etc/systemd/system/ -thrillwiki ALL=(ALL) NOPASSWD: /bin/sed -i * /etc/systemd/system/thrillwiki.service -EOF - - # Copy sudoers file to VM and install it - log "📋 Copying sudoers configuration to VM..." - scp "$sudoers_config" "$VM_NAME:/tmp/" - - # Install sudoers configuration (this requires password once) - log "Installing sudo configuration (may require password this one time)..." - if ssh -t "$VM_NAME" "sudo cp /tmp/thrillwiki-sudoers /etc/sudoers.d/thrillwiki && sudo chmod 440 /etc/sudoers.d/thrillwiki && sudo visudo -c"; then - log_success "✅ Passwordless sudo configured successfully" - else - log_error "Failed to configure passwordless sudo. Setup will continue but may prompt for passwords." - # Continue anyway, as the user might have already configured this - fi - - # Cleanup - rm -f "$sudoers_config" - ssh "$VM_NAME" "rm -f /tmp/thrillwiki-sudoers" -} - -# Validate GitHub token and repository access -validate_github_access() { - log "🔍 Validating GitHub token and repository access..." - - # Extract repository path from REPO_URL - local repo_path=$(echo "$REPO_URL" | sed 's|^https://github.com/||' | sed 's|/$||') - if [ -z "$repo_path" ]; then - repo_path="pacnpal/thrillwiki_django_no_react" # fallback - log_warning "Using fallback repository path: $repo_path" - fi - - # Test GitHub API authentication - log "Testing GitHub API authentication..." - if ! curl -sf -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/user" > /dev/null; then - log_error "❌ GitHub token authentication failed!" - log_error "The token cannot authenticate with GitHub API." - - if [ "$NON_INTERACTIVE" = "true" ]; then - log_error "Non-interactive mode: Cannot prompt for new token." - log_error "Please update your GITHUB_TOKEN environment variable with a valid token." - exit 1 - fi - - echo - echo "❌ Your GitHub token is invalid or expired!" - echo "Please create a new Personal Access Token at: https://github.com/settings/tokens" - echo "Required permissions: repo (full control of private repositories)" - echo - read -s -p "Enter a new GitHub Personal Access Token: " GITHUB_TOKEN - echo - - if [ -z "$GITHUB_TOKEN" ]; then - log_error "No token provided. Cannot continue." - return 1 - fi - - # Save the new token - save_github_token - - # Test the new token - if ! curl -sf -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/user" > /dev/null; then - log_error "❌ New token is also invalid. Please check your token and try again." - return 1 - fi - - log_success "✅ New GitHub token validated successfully" - else - log_success "✅ GitHub token authentication successful" - fi - - # Test repository access - log "Testing repository access: $repo_path" - local repo_response=$(curl -sf -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/repos/$repo_path") - - if [ $? -ne 0 ] || [ -z "$repo_response" ]; then - log_error "❌ Cannot access repository: $repo_path" - log_error "This could be due to:" - log_error "1. Repository doesn't exist" - log_error "2. Repository is private and token lacks access" - log_error "3. Token doesn't have 'repo' permissions" - - if [ "$NON_INTERACTIVE" = "true" ]; then - log_error "Non-interactive mode: Cannot prompt for new repository." - log_error "Please update your repository URL or token permissions." - return 1 - fi - - echo - echo "❌ Cannot access repository: $REPO_URL" - echo "Current repository path: $repo_path" - echo - echo "The token has these scopes: $(curl -sf -H "Authorization: token $GITHUB_TOKEN" -I "https://api.github.com/user" | grep -i "x-oauth-scopes:" | cut -d: -f2 | xargs || echo "unknown")" - echo "Required scope: 'repo' (full control of private repositories)" - echo - echo "Options:" - echo "1. Enter a new GitHub token with 'repo' permissions" - echo "2. Enter a different repository URL" - echo "3. Exit and fix token permissions at https://github.com/settings/tokens" - echo - read -p "Select option (1-3): " repo_access_choice - - case $repo_access_choice in - 1) - echo - echo "Please create a new GitHub Personal Access Token:" - echo "1. Go to: https://github.com/settings/tokens/new" - echo "2. Give it a name like 'ThrillWiki Template Automation'" - echo "3. Check the 'repo' scope (full control of private repositories)" - echo "4. Click 'Generate token'" - echo "5. Copy the new token" - echo - read -s -p "Enter new GitHub Personal Access Token: " new_github_token - echo - - if [ -z "$new_github_token" ]; then - log_error "No token provided. Cannot continue." - return 1 - fi - - # Test the new token - log "Testing new GitHub token..." - if ! curl -sf -H "Authorization: token $new_github_token" "https://api.github.com/user" > /dev/null; then - log_error "❌ New token authentication failed. Please check your token." - return 1 - fi - - # Test repository access with new token - log "Testing repository access with new token: $repo_path" - local new_repo_response=$(curl -sf -H "Authorization: token $new_github_token" "https://api.github.com/repos/$repo_path") - - if [ $? -ne 0 ] || [ -z "$new_repo_response" ]; then - log_error "❌ New token still cannot access the repository." - log_error "Please ensure the token has 'repo' scope and try again." - return 1 - fi - - # Token works! Update it - GITHUB_TOKEN="$new_github_token" - log_success "✅ New GitHub token validated successfully" - - # Show new token scopes - local new_scopes=$(curl -sf -H "Authorization: token $GITHUB_TOKEN" -I "https://api.github.com/user" | grep -i "x-oauth-scopes:" | cut -d: -f2 | xargs || echo "unknown") - log "New token scopes: $new_scopes" - - # Save the new token - save_github_token - - # Continue with validation using the new token - repo_response="$new_repo_response" - ;; - 2) - echo - read -p "Enter new repository URL: " new_repo_url - - if [ -z "$new_repo_url" ]; then - log "Setup cancelled by user" - exit 0 - fi - - REPO_URL="$new_repo_url" - - # Extract new repo path and test again - repo_path=$(echo "$REPO_URL" | sed 's|^https://github.com/||' | sed 's|/$||') - log "Testing new repository: $repo_path" - - repo_response=$(curl -sf -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/repos/$repo_path") - if [ $? -ne 0 ] || [ -z "$repo_response" ]; then - log_error "❌ New repository is also inaccessible. Please check the URL and token permissions." - return 1 - fi - - log_success "✅ New repository validated successfully" - - # Update saved configuration with new repo URL - save_config - ;; - 3|"") - log "Setup cancelled by user" - echo "Please update your token permissions at: https://github.com/settings/tokens" - return 1 - ;; - *) - log_error "Invalid choice. Please select 1, 2, or 3." - return 1 - ;; - esac - else - log_success "✅ Repository access confirmed: $repo_path" - fi - - # Show repository info - local repo_name=$(echo "$repo_response" | python3 -c "import sys, json; print(json.load(sys.stdin).get('full_name', 'Unknown'))" 2>/dev/null || echo "$repo_path") - local repo_private=$(echo "$repo_response" | python3 -c "import sys, json; print(json.load(sys.stdin).get('private', False))" 2>/dev/null || echo "Unknown") - - log "📊 Repository info:" - echo " Name: $repo_name" - echo " Private: $repo_private" - echo " URL: $REPO_URL" -} - -# Clone project from GitHub using PAT authentication -deploy_project_files() { - log "🔄 Cloning project from GitHub repository..." - - # Validate GitHub access before attempting clone - if ! validate_github_access; then - log_error "❌ GitHub token validation failed during deployment." - log_error "Cannot proceed with repository cloning without valid GitHub access." - exit 1 - fi - - # First, configure passwordless sudo for required operations - configure_passwordless_sudo - - # Remove any existing directory first - ssh "$VM_NAME" "sudo rm -rf /home/thrillwiki/thrillwiki" - - # Create parent directory - ssh "$VM_NAME" "sudo mkdir -p /home/thrillwiki && sudo chown thrillwiki:thrillwiki /home/thrillwiki" - - # Clone the repository using PAT authentication - # Extract repository path from REPO_URL (already validated) - local repo_path=$(echo "$REPO_URL" | sed 's|^https://github.com/||' | sed 's|/$||') - local auth_url="https://${GITHUB_USERNAME}:${GITHUB_TOKEN}@github.com/${repo_path}.git" - - log "Cloning repository: $REPO_URL" - if ssh "$VM_NAME" "cd /home/thrillwiki && git clone '$auth_url' thrillwiki"; then - log_success "✅ Repository cloned successfully from GitHub!" - else - log_error "❌ Failed to clone repository from GitHub" - log_error "Repository access was validated, but clone failed. This may be due to:" - log_error "1. Network connectivity issues from VM to GitHub" - log_error "2. Git not installed on VM" - log_error "3. Disk space issues on VM" - log_error "Try manually: ssh $VM_NAME 'git --version && df -h'" - exit 1 - fi - - # Set proper ownership - ssh "$VM_NAME" "sudo chown -R thrillwiki:thrillwiki /home/thrillwiki/thrillwiki" - - # Show repository info - local commit_info=$(ssh "$VM_NAME" "cd /home/thrillwiki/thrillwiki && git log -1 --oneline") - log "📊 Cloned repository at commit: $commit_info" - - # Remove the authentication URL from git config for security - ssh "$VM_NAME" "cd /home/thrillwiki/thrillwiki && git remote set-url origin $REPO_URL" - log "🔒 Cleaned up authentication URL from git configuration" -} - -# Run setup script on the VM after files are synchronized -run_vm_setup_script() { - log "⚙️ Running application setup on template VM..." - - # Create optimized VM setup script for template VMs - local vm_setup_script="/tmp/template_vm_thrillwiki_setup.sh" - - cat > "$vm_setup_script" << 'EOF' -#!/bin/bash -set -e - -echo "🚀 Setting up ThrillWiki on template VM (optimized for pre-configured templates)..." - -# Navigate to project directory -cd /home/thrillwiki/thrillwiki - -# Template VMs should already have most packages - just update security -echo "📦 Quick system update (template optimization)..." -sudo apt update >/dev/null 2>&1 -if sudo apt list --upgradable 2>/dev/null | grep -q security; then - echo "🔒 Installing security updates..." - sudo apt upgrade -y --with-new-pkgs -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" >/dev/null 2>&1 -else - echo "✅ No security updates needed" -fi - -# UV should already be installed in template -echo "🔧 Checking UV installation..." -# Check multiple possible UV locations -export PATH="/home/thrillwiki/.local/bin:/home/thrillwiki/.cargo/bin:$PATH" -if ! command -v uv > /dev/null 2>&1; then - echo "📥 Installing UV (not found in template)..." - curl -LsSf https://astral.sh/uv/install.sh | sh - - # UV installer may put it in .local/bin or .cargo/bin - if [ -f ~/.cargo/env ]; then - source ~/.cargo/env - fi - - # Add both possible paths - export PATH="/home/thrillwiki/.local/bin:/home/thrillwiki/.cargo/bin:$PATH" - - # Verify installation worked - if command -v uv > /dev/null 2>&1; then - echo "✅ UV installed successfully at: $(which uv)" - else - echo "❌ UV installation failed or not in PATH" - echo "Current PATH: $PATH" - echo "Checking possible locations:" - ls -la ~/.local/bin/ 2>/dev/null || echo "~/.local/bin/ not found" - ls -la ~/.cargo/bin/ 2>/dev/null || echo "~/.cargo/bin/ not found" - exit 1 - fi -else - echo "✅ UV already installed at: $(which uv)" -fi - -# PostgreSQL should already be configured in template -echo "🗄️ Checking PostgreSQL..." -if ! sudo systemctl is-active --quiet postgresql; then - echo "▶️ Starting PostgreSQL..." - sudo systemctl start postgresql - sudo systemctl enable postgresql -else - echo "✅ PostgreSQL already running" -fi - -# Configure database if not already done -echo "🔧 Setting up database..." -sudo -u postgres createdb thrillwiki 2>/dev/null || echo "📋 Database may already exist" -sudo -u postgres createuser thrillwiki_user 2>/dev/null || echo "👤 User may already exist" -sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE thrillwiki TO thrillwiki_user;" 2>/dev/null || echo "🔑 Privileges may already be set" - -# Install Python dependencies with UV -echo "📦 Installing Python dependencies..." -UV_CMD="$(which uv)" -if [ -n "$UV_CMD" ] && "$UV_CMD" sync; then - echo "✅ UV sync completed successfully" -else - echo "⚠️ UV sync failed, falling back to pip..." - python3 -m venv .venv - source .venv/bin/activate - pip install -e . -fi - -# Create necessary directories -echo "📁 Creating directories..." -mkdir -p logs backups static media - -# Make scripts executable -echo "⚡ Making scripts executable..." -find scripts -name "*.sh" -exec chmod +x {} \; 2>/dev/null || echo "ℹ️ No shell scripts found" - -# Run Django setup -echo "🌍 Running Django setup..." -UV_CMD="$(which uv)" -echo " 🔄 Running migrations..." -if [ -n "$UV_CMD" ] && "$UV_CMD" run python manage.py migrate; then - echo " ✅ Migrations completed" -else - echo " ⚠️ UV run failed, trying direct Python..." - python3 manage.py migrate -fi - -echo " 📦 Collecting static files..." -if [ -n "$UV_CMD" ] && "$UV_CMD" run python manage.py collectstatic --noinput; then - echo " ✅ Static files collected" -else - echo " ⚠️ UV run failed, trying direct Python..." - python3 manage.py collectstatic --noinput -fi - -# Install systemd services if available -if [ -f scripts/systemd/thrillwiki.service ]; then - echo "🔧 Installing systemd service..." - sudo cp scripts/systemd/thrillwiki.service /etc/systemd/system/ - # Fix the home directory path for thrillwiki user - sudo sed -i 's|/home/ubuntu|/home/thrillwiki|g' /etc/systemd/system/thrillwiki.service - sudo systemctl daemon-reload - sudo systemctl enable thrillwiki.service - - if sudo systemctl start thrillwiki.service; then - echo "✅ ThrillWiki service started successfully" - else - echo "⚠️ Service start failed, checking logs..." - sudo systemctl status thrillwiki.service --no-pager -l - fi -else - echo "ℹ️ No systemd service files found, ThrillWiki ready for manual start" - echo "💡 You can start it manually with: uv run python manage.py runserver 0.0.0.0:8000" -fi - -# Test the application -echo "🧪 Testing application..." -sleep 3 -if curl -f http://localhost:8000 >/dev/null 2>&1; then - echo "✅ ThrillWiki is responding on port 8000!" -else - echo "⚠️ ThrillWiki may not be responding yet (this is normal for first start)" -fi - -# Setup auto-pull functionality -echo "🔄 Setting up auto-pull functionality..." - -# Create ***REMOVED*** file with GitHub token for auto-pull authentication -if [ -n "${GITHUB_TOKEN:-}" ]; then - echo "GITHUB_TOKEN=$GITHUB_TOKEN" > ***REMOVED*** - echo "✅ GitHub token configured for auto-pull" -else - echo "⚠️ GITHUB_TOKEN not found - auto-pull will use fallback mode" - echo "# GitHub token not available during setup" > ***REMOVED*** -fi - -# Ensure scripts/vm directory exists and make auto-pull script executable -if [ -f "scripts/vm/auto-pull.sh" ]; then - chmod +x scripts/vm/auto-pull.sh - - # Create cron job for auto-pull (every 10 minutes) - echo "⏰ Installing cron job for auto-pull (every 10 minutes)..." - - # Create cron entry - CRON_ENTRY="*/10 * * * * [AWS-SECRET-REMOVED]uto-pull.sh >> /home/thrillwiki/logs/cron.log 2>&1" - - # Install cron job if not already present - if ! crontab -l 2>/dev/null | grep -q "auto-pull.sh"; then - # Add to existing crontab or create new one - (crontab -l 2>/dev/null || echo "") | { - cat - echo "# ThrillWiki Auto-Pull - Update repository every 10 minutes" - echo "$CRON_ENTRY" - } | crontab - - - echo "✅ Auto-pull cron job installed successfully" - echo "📋 Cron job: $CRON_ENTRY" - else - echo "✅ Auto-pull cron job already exists" - fi - - # Ensure cron service is running - if ! systemctl is-active --quiet cron 2>/dev/null; then - echo "▶️ Starting cron service..." - sudo systemctl start cron - sudo systemctl enable cron - else - echo "✅ Cron service is already running" - fi - - # Test auto-pull script - echo "🧪 Testing auto-pull script..." - if timeout 30 ./scripts/vm/auto-pull.sh --status; then - echo "✅ Auto-pull script test successful" - else - echo "⚠️ Auto-pull script test failed or timed out (this may be normal)" - fi - - echo "📋 Auto-pull setup completed:" - echo " - Script: [AWS-SECRET-REMOVED]uto-pull.sh" - echo " - Schedule: Every 10 minutes" - echo " - Logs: /home/thrillwiki/logs/auto-pull.log" - echo " - Status: Run './scripts/vm/auto-pull.sh --status' to check" - -else - echo "⚠️ Auto-pull script not found, skipping auto-pull setup" -fi - -echo "🎉 Template VM ThrillWiki setup completed successfully! ⚡" -echo "🌐 Application should be available at http://$(hostname -I | awk '{print $1}'):8000" -echo "🔄 Auto-pull: Repository will be updated every 10 minutes automatically" -EOF - - # Copy setup script to VM with progress - log "📋 Copying setup script to VM..." - scp "$vm_setup_script" "$VM_NAME:/tmp/" - - # Make it executable and run it - ssh "$VM_NAME" "chmod +x /tmp/template_vm_thrillwiki_setup.sh" - - log "⚡ Executing setup script on VM (this may take a few minutes)..." - if ssh "$VM_NAME" "bash /tmp/template_vm_thrillwiki_setup.sh"; then - log_success "✅ Application setup completed successfully!" - else - log_error "❌ Application setup failed" - log "Try debugging with: ssh $VM_NAME 'journalctl -u thrillwiki -f'" - exit 1 - fi - - # Cleanup - rm -f "$vm_setup_script" -} - -# Start services -start_template_services() { - log "Starting ThrillWiki services on template VM..." - - # Start VM service - ssh "$VM_NAME" "sudo systemctl start thrillwiki 2>/dev/null || echo 'Service may need manual start'" - - # Verify service is running - if ssh "$VM_NAME" "systemctl is-active --quiet thrillwiki 2>/dev/null"; then - log_success "ThrillWiki service started successfully on template VM ⚡" - else - log_warning "ThrillWiki service may need manual configuration" - log "Try: ssh $VM_NAME 'systemctl status thrillwiki'" - fi - - # Get service status - log "Template VM service status:" - ssh "$VM_NAME" "systemctl status thrillwiki --no-pager -l 2>/dev/null || echo 'Service status not available'" -} - -# Setup webhook listener -setup_template_webhook_listener() { - log "Setting up webhook listener for template deployments..." - - # Create webhook start script - cat > "$PROJECT_DIR/start-template-webhook.sh" << 'EOF' -#!/bin/bash -cd "$(dirname "$0")" -source ***REMOVED***.webhook -echo "Starting webhook listener for template-based deployments ⚡" -python3 scripts/webhook-listener.py -EOF - - chmod +x "$PROJECT_DIR/start-template-webhook.sh" - - log_success "Template webhook listener configured" - log "You can start the webhook listener with: ./start-template-webhook.sh" -} - -# Perform end-to-end test -test_template_deployment() { - log "Performing end-to-end template deployment test..." - - # Test VM connectivity - if ssh "$VM_NAME" "echo 'Template VM connectivity test passed'"; then - log_success "Template VM connectivity test passed ⚡" - else - log_error "Template VM connectivity test failed" - return 1 - fi - - # Test ThrillWiki service - if ssh "$VM_NAME" "curl -f http://localhost:8000 >/dev/null 2>&1"; then - log_success "ThrillWiki service test passed on template VM ⚡" - else - log_warning "ThrillWiki service test failed - checking logs..." - ssh "$VM_NAME" "journalctl -u thrillwiki --no-pager -l | tail -20 2>/dev/null || echo 'Service logs not available'" - fi - - # Test template deployment script - log "Testing template deployment capabilities..." - cd "$PROJECT_DIR/scripts/unraid" - ./template-utils.sh check && log_success "Template utilities working ⚡" - - log_success "End-to-end template deployment test completed ⚡" -} - -# Generate final instructions for template deployment -generate_template_instructions() { - log "Generating final template deployment instructions..." - - cat > "$PROJECT_DIR/TEMPLATE_SETUP_COMPLETE.md" << EOF -# ThrillWiki Template-Based Automation - Setup Complete! 🚀⚡ - -Your ThrillWiki template-based CI/CD system has been fully automated and deployed! - -## Template Deployment Benefits ⚡ - -- **Speed**: 2-5 minute VM deployment vs 20-30 minutes with autoinstall -- **Reliability**: Pre-configured template eliminates installation failures -- **Efficiency**: Copy-on-write disk format saves space - -## VM Information - -- **VM Name**: $VM_NAME -- **Template VM**: $TEMPLATE_VM_NAME -- **VM IP**: $VM_IP -- **SSH Access**: \`ssh $VM_NAME\` -- **Deployment Type**: Template-based ⚡ - -## Services Status - -- **ThrillWiki Service**: Running on template VM -- **Database**: PostgreSQL configured in template -- **Web Server**: Available at http://$VM_IP:8000 - -## Next Steps - -### 1. Start Template Webhook Listener -\`\`\`bash -./start-template-webhook.sh -\`\`\` - -### 2. Configure GitHub Webhook -- Go to your repository: $REPO_URL -- Settings → Webhooks → Add webhook -- **Payload URL**: http://YOUR_PUBLIC_IP:$WEBHOOK_PORT/webhook -- **Content type**: application/json -- **Secret**: (your webhook secret) -- **Events**: Just the push event - -### 3. Test the Template System -\`\`\`bash -# Test template VM connection -ssh $VM_NAME - -# Test service status -ssh $VM_NAME "systemctl status thrillwiki" - -# Test template utilities -cd scripts/unraid -./template-utils.sh check -./template-utils.sh info - -# Deploy another VM from template (fast!) -./template-utils.sh deploy test-vm-2 - -# Make a test commit to trigger automatic deployment -git add . -git commit -m "Test automated template deployment" -git push origin main -\`\`\` - -## Template Management Commands - -### Template VM Management -\`\`\`bash -# Check template status and info -./scripts/unraid/template-utils.sh status -./scripts/unraid/template-utils.sh info - -# List all template-based VMs -./scripts/unraid/template-utils.sh list - -# Deploy new VM from template (2-5 minutes!) -./scripts/unraid/template-utils.sh deploy VM_NAME - -# Copy template to new VM -./scripts/unraid/template-utils.sh copy VM_NAME -\`\`\` - -### Python Template Scripts -\`\`\`bash -# Template-based deployment -python3 scripts/unraid/main_template.py deploy - -# Template management -python3 scripts/unraid/main_template.py template info -python3 scripts/unraid/main_template.py template check -python3 scripts/unraid/main_template.py template list - -# VM operations (fast with templates!) -python3 scripts/unraid/main_template.py setup -python3 scripts/unraid/main_template.py start -python3 scripts/unraid/main_template.py ip -python3 scripts/unraid/main_template.py status -\`\`\` - -### Service Management on Template VM -\`\`\`bash -# Check service status -ssh $VM_NAME "systemctl status thrillwiki" - -# Restart service -ssh $VM_NAME "sudo systemctl restart thrillwiki" - -# View logs -ssh $VM_NAME "journalctl -u thrillwiki -f" -\`\`\` - -## Template Maintenance - -### Updating Your Template VM -\`\`\`bash -# Get update instructions -./scripts/unraid/template-utils.sh update - -# After updating template VM manually: -./scripts/unraid/template-utils.sh check -\`\`\` - -### Creating Additional Template VMs -You can create multiple template VMs for different purposes: -- Development: \`thrillwiki-template-dev\` -- Staging: \`thrillwiki-template-staging\` -- Production: \`thrillwiki-template-prod\` - -## Troubleshooting - -### Template VM Issues -1. **Template not found**: Verify template VM exists and is stopped -2. **Template VM running**: Stop template before creating instances -3. **Deployment slow**: Template should be 5-10x faster than autoinstall - -### Common Commands -\`\`\`bash -# Check if template is ready -./scripts/unraid/template-utils.sh check - -# Test template VM connectivity -ssh root@unraid-server "virsh domstate $TEMPLATE_VM_NAME" - -# Force stop template VM if needed -ssh root@unraid-server "virsh shutdown $TEMPLATE_VM_NAME" -\`\`\` - -### Support Files -- Template Configuration: \`.thrillwiki-template-config\` -- Environment: \`***REMOVED***.unraid\`, \`***REMOVED***.webhook\` -- Logs: \`logs/\` directory -- Documentation: \`scripts/unraid/README-template-deployment.md\` - -## Performance Comparison - -| Operation | Autoinstall | Template | Improvement | -|-----------|------------|----------|-------------| -| VM Creation | 20-30 min | 2-5 min | **5-6x faster** | -| Boot Time | Full install | Instant | **Instant** | -| Reliability | ISO issues | Pre-tested | **Much higher** | -| Total Deploy | 45+ min | ~10 min | **4-5x faster** | - -**Your template-based automated CI/CD system is now ready!** 🚀⚡ - -Every push to the main branch will automatically deploy to your template VM in minutes, not hours! -EOF - - log_success "Template setup instructions saved to TEMPLATE_SETUP_COMPLETE.md" -} - -# Main automation function -main() { - log "🚀⚡ Starting ThrillWiki Template-Based Complete Unraid Automation" - echo "[AWS-SECRET-REMOVED]==========================" - echo - log_template "Template deployment is 5-10x FASTER than autoinstall approach!" - echo - - # Create logs directory - mkdir -p "$LOG_DIR" - - # Handle reset modes - if [[ "$RESET_ALL" == "true" ]]; then - log "🔄 Complete reset mode - deleting VM and configuration" - echo - - # Load configuration first to get connection details for VM deletion - if [[ -f "$CONFIG_FILE" ]]; then - source "$CONFIG_FILE" - log_success "Loaded existing configuration for VM deletion" - else - log_warning "No configuration file found, will skip VM deletion" - fi - - # Delete existing VM if config exists - if [[ -f "$CONFIG_FILE" ]]; then - log "🗑️ Deleting existing template VM..." - - # Check if ***REMOVED***.unraid file exists - if [ -f "$PROJECT_DIR/***REMOVED***.unraid" ]; then - log "Loading environment from ***REMOVED***.unraid..." - set -a - source "$PROJECT_DIR/***REMOVED***.unraid" 2>/dev/null || true - set +a - else - log_warning "***REMOVED***.unraid file not found - VM deletion may not work properly" - log "The VM may not exist or may have been deleted manually" - fi - - # Stop existing VM if running before deletion (for complete reset) - log "🛑 Ensuring VM is stopped before deletion..." - if [ -n "${VM_NAME:-}" ] && [ -n "${UNRAID_HOST:-}" ] && [ -n "${UNRAID_USER:-}" ]; then - if ! stop_existing_vm_for_reset "$VM_NAME" "$UNRAID_HOST" "$UNRAID_USER"; then - log_warning "Failed to stop VM '$VM_NAME' - continuing anyway for complete reset" - log_warning "VM may be forcibly deleted during reset process" - fi - else - log_warning "Missing VM connection details - skipping VM shutdown check" - fi - - # Debug environment loading - log "Debug: VM_NAME=${VM_NAME:-'not set'}" - log "Debug: UNRAID_HOST=${UNRAID_HOST:-'not set'}" - - # Check if main_template.py exists - if [ ! -f "$SCRIPT_DIR/main_template.py" ]; then - log_error "main_template.py not found at: $SCRIPT_DIR/main_template.py" - log "Available files in $SCRIPT_DIR:" - ls -la "$SCRIPT_DIR" - log "Skipping VM deletion due to missing script..." - elif [ -z "${VM_NAME:-}" ] || [ -z "${UNRAID_HOST:-}" ]; then - log_warning "Missing required environment variables for VM deletion" - log "VM_NAME: ${VM_NAME:-'not set'}" - log "UNRAID_HOST: ${UNRAID_HOST:-'not set'}" - log "Skipping VM deletion - VM may not exist or was deleted manually" - else - log "Found main_template.py at: $SCRIPT_DIR/main_template.py" - - # Run delete with timeout and better error handling - log "Attempting VM deletion with timeout..." - if timeout 60 python3 "$SCRIPT_DIR/main_template.py" delete 2>&1; then - log_success "Template VM deleted successfully" - else - deletion_exit_code=$? - if [ $deletion_exit_code -eq 124 ]; then - log_error "⚠️ VM deletion timed out after 60 seconds" - else - log "⚠️ Template VM deletion failed (exit code: $deletion_exit_code) or VM didn't exist" - fi - - # Continue anyway since this might be expected - log "Continuing with script execution..." - fi - fi - fi - - # Remove configuration files - if [[ -f "$CONFIG_FILE" ]]; then - rm "$CONFIG_FILE" - log_success "Template configuration file removed" - fi - - # Remove GitHub token file - if [[ -f "$TOKEN_FILE" ]]; then - rm "$TOKEN_FILE" - log_success "GitHub token file removed" - fi - - # Remove environment files - rm -f "$PROJECT_DIR/***REMOVED***.unraid" "$PROJECT_DIR/***REMOVED***.webhook" - log_success "Environment files removed" - - log_success "Complete reset finished - continuing with fresh template setup" - echo - - elif [[ "$RESET_VM_ONLY" == "true" ]]; then - log "🔄 VM-only reset mode - deleting VM, preserving configuration" - echo - - # Load configuration to get connection details - if [[ -f "$CONFIG_FILE" ]]; then - source "$CONFIG_FILE" - log_success "Loaded existing configuration" - else - log_error "No configuration file found. Cannot reset VM without connection details." - echo " Run the script without reset flags first to create initial configuration." - exit 1 - fi - - # Stop existing VM if running before deletion - log "🛑 Ensuring VM is stopped before deletion..." - if ! stop_existing_vm_for_reset "$VM_NAME" "$UNRAID_HOST" "$UNRAID_USER"; then - log_error "Failed to stop VM '$VM_NAME'. Cannot proceed safely with VM deletion." - log_error "Please manually stop the VM or resolve the connection issue." - exit 1 - fi - - # Delete existing VM - log "🗑️ Deleting existing template VM..." - - # Check if ***REMOVED***.unraid file exists - if [ -f "$PROJECT_DIR/***REMOVED***.unraid" ]; then - log "Loading environment from ***REMOVED***.unraid..." - set -a - source "$PROJECT_DIR/***REMOVED***.unraid" 2>/dev/null || true - set +a - else - log_warning "***REMOVED***.unraid file not found - VM deletion may not work properly" - log "The VM may not exist or may have been deleted manually" - fi - - # Debug environment loading - log "Debug: VM_NAME=${VM_NAME:-'not set'}" - log "Debug: UNRAID_HOST=${UNRAID_HOST:-'not set'}" - - # Check if main_template.py exists - if [ ! -f "$SCRIPT_DIR/main_template.py" ]; then - log_error "main_template.py not found at: $SCRIPT_DIR/main_template.py" - log "Available files in $SCRIPT_DIR:" - ls -la "$SCRIPT_DIR" - log "Skipping VM deletion due to missing script..." - elif [ -z "${VM_NAME:-}" ] || [ -z "${UNRAID_HOST:-}" ]; then - log_warning "Missing required environment variables for VM deletion" - log "VM_NAME: ${VM_NAME:-'not set'}" - log "UNRAID_HOST: ${UNRAID_HOST:-'not set'}" - log "Skipping VM deletion - VM may not exist or was deleted manually" - else - log "Found main_template.py at: $SCRIPT_DIR/main_template.py" - - # Run delete with timeout and better error handling - log "Attempting VM deletion with timeout..." - if timeout 60 python3 "$SCRIPT_DIR/main_template.py" delete 2>&1; then - log_success "Template VM deleted successfully" - else - deletion_exit_code=$? - if [ $deletion_exit_code -eq 124 ]; then - log_error "⚠️ VM deletion timed out after 60 seconds" - else - log "⚠️ Template VM deletion failed (exit code: $deletion_exit_code) or VM didn't exist" - fi - - # Continue anyway since this might be expected - log "Continuing with script execution..." - fi - fi - - # Remove only environment files, keep main config - rm -f "$PROJECT_DIR/***REMOVED***.unraid" "$PROJECT_DIR/***REMOVED***.webhook" - log_success "Environment files removed, configuration preserved" - - # Check if GitHub token is available for VM recreation - if [ "$GITHUB_API_ENABLED" = "true" ] && [ -n "$GITHUB_USERNAME" ]; then - log "🔍 Checking for GitHub token availability..." - - # Try to load token from saved file - if load_github_token; then - log_success "✅ GitHub token loaded from secure storage" - elif GITHUB_TOKEN=$(python3 "$SCRIPT_DIR/../github-auth.py" token 2>/dev/null) && [ -n "$GITHUB_TOKEN" ]; then - log_success "✅ GitHub token obtained from authentication script" - - # Validate the token can access the repository immediately - log "🔍 Validating token can access repository..." - if ! validate_github_access; then - log_error "❌ GitHub token validation failed during VM reset." - log_error "Please check your token and repository access before recreating the VM." - return 1 - fi - - # Save the token for future use - save_github_token - else - log_warning "⚠️ No GitHub token found - you'll need to provide it" - echo "GitHub authentication is required for repository cloning and auto-pull." - echo - - if [ "$NON_INTERACTIVE" = "true" ]; then - if [ -n "${GITHUB_TOKEN:-}" ]; then - log "Using token from environment variable" - save_github_token - else - log_error "GITHUB_TOKEN environment variable not set for non-interactive mode" - log_error "Set: export GITHUB_TOKEN='your_token'" - exit 1 - fi - else - read -s -p "Enter GitHub Personal Access Token: " GITHUB_TOKEN - echo - - if [ -n "$GITHUB_TOKEN" ]; then - save_github_token - log_success "✅ GitHub token saved for VM recreation" - else - log_error "GitHub token is required for repository operations" - exit 1 - fi - fi - fi - fi - - log_success "VM reset complete - will recreate VM with saved configuration" - echo - - elif [[ "$RESET_CONFIG_ONLY" == "true" ]]; then - log "🔄 Config-only reset mode - deleting configuration, preserving VM" - echo - - # Remove configuration files - if [[ -f "$CONFIG_FILE" ]]; then - rm "$CONFIG_FILE" - log_success "Template configuration file removed" - fi - - # Remove environment files - rm -f "$PROJECT_DIR/***REMOVED***.unraid" "$PROJECT_DIR/***REMOVED***.webhook" - log_success "Environment files removed" - - log_success "Configuration reset complete - will prompt for fresh configuration" - echo - fi - - # Collect configuration - prompt_template_config - - # Setup steps - setup_ssh_keys - setup_unraid_access - create_environment_files - install_dependencies - create_template_vm - wait_for_template_vm - configure_template_vm - start_template_services - setup_template_webhook_listener - test_template_deployment - generate_template_instructions - - echo - log_success "🎉⚡ Template-based complete automation setup finished!" - echo - log "Your ThrillWiki template VM is running at: http://$VM_IP:8000" - log "Start the webhook listener: ./start-template-webhook.sh" - log "See TEMPLATE_SETUP_COMPLETE.md for detailed instructions" - echo - log_template "🚀 Template deployment is 5-10x FASTER than traditional autoinstall!" - log "The system will now automatically deploy in MINUTES when you push to GitHub!" -} - -# Run main function and log output -main "$@" 2>&1 | tee "$LOG_DIR/template-automation.log" diff --git a/shared/scripts/unraid/template-utils.sh b/shared/scripts/unraid/template-utils.sh deleted file mode 100755 index 61ed9945..00000000 --- a/shared/scripts/unraid/template-utils.sh +++ /dev/null @@ -1,249 +0,0 @@ -#!/bin/bash -# -# ThrillWiki Template VM Management Utilities -# Quick helpers for managing template VMs on Unraid -# - -# Set strict mode -set -e - -# Colors for output -RED='\033[0;31m' -GREEN='\033[0;32m' -YELLOW='\033[1;33m' -BLUE='\033[0;34m' -NC='\033[0m' # No Color - -log() { - echo -e "${BLUE}[TEMPLATE]${NC} $1" -} - -log_success() { - echo -e "${GREEN}[SUCCESS]${NC} $1" -} - -log_warning() { - echo -e "${YELLOW}[WARNING]${NC} $1" -} - -log_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Configuration -SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" - -# Load environment variables if available -if [[ -f "$PROJECT_DIR/***REMOVED***.unraid" ]]; then - source "$PROJECT_DIR/***REMOVED***.unraid" -else - log_error "No ***REMOVED***.unraid file found. Please run setup-complete-automation.sh first." - exit 1 -fi - -# Function to show help -show_help() { - echo "ThrillWiki Template VM Management Utilities" - echo "" - echo "Usage:" - echo " $0 check Check if template exists and is ready" - echo " $0 info Show template information" - echo " $0 list List all template-based VM instances" - echo " $0 copy VM_NAME Copy template to new VM" - echo " $0 deploy VM_NAME Deploy complete VM from template" - echo " $0 status Show template VM status" - echo " $0 update Update template VM (instructions)" - echo " $0 autopull Manage auto-pull functionality" - echo "" - echo "Auto-pull Commands:" - echo " $0 autopull status Show auto-pull status on VMs" - echo " $0 autopull enable VM Enable auto-pull on specific VM" - echo " $0 autopull disable VM Disable auto-pull on specific VM" - echo " $0 autopull logs VM Show auto-pull logs from VM" - echo " $0 autopull test VM Test auto-pull on specific VM" - echo "" - echo "Examples:" - echo " $0 check # Verify template is ready" - echo " $0 copy thrillwiki-prod # Copy template to new VM" - echo " $0 deploy thrillwiki-test # Complete deployment from template" - echo " $0 autopull status # Check auto-pull status on all VMs" - echo " $0 autopull logs $VM_NAME # View auto-pull logs" - exit 0 -} - -# Check if required environment variables are set -check_environment() { - if [[ -z "$UNRAID_HOST" ]]; then - log_error "UNRAID_HOST not set. Please configure your environment." - exit 1 - fi - - if [[ -z "$UNRAID_USER" ]]; then - UNRAID_USER="root" - log "Using default UNRAID_USER: $UNRAID_USER" - fi - - log_success "Environment configured: $UNRAID_USER@$UNRAID_HOST" -} - -# Function to run python template manager commands -run_template_manager() { - cd "$SCRIPT_DIR" - export UNRAID_HOST="$UNRAID_HOST" - export UNRAID_USER="$UNRAID_USER" - python3 template_manager.py "$@" -} - -# Function to run template-based main script -run_main_template() { - cd "$SCRIPT_DIR" - - # Export all environment variables - export UNRAID_HOST="$UNRAID_HOST" - export UNRAID_USER="$UNRAID_USER" - export VM_NAME="$1" - export VM_MEMORY="${VM_MEMORY:-4096}" - export VM_VCPUS="${VM_VCPUS:-2}" - export VM_DISK_SIZE="${VM_DISK_SIZE:-50}" - export VM_IP="${VM_IP:-dhcp}" - export REPO_URL="${REPO_URL:-}" - export GITHUB_TOKEN="${GITHUB_TOKEN:-}" - - shift # Remove VM_NAME from arguments - python3 main_template.py "$@" -} - -# Parse command line arguments -case "${1:-}" in - check) - log "🔍 Checking template VM availability..." - check_environment - run_template_manager check - ;; - - info) - log "📋 Getting template VM information..." - check_environment - run_template_manager info - ;; - - list) - log "📋 Listing template-based VM instances..." - check_environment - run_template_manager list - ;; - - copy) - if [[ -z "${2:-}" ]]; then - log_error "VM name is required for copy operation" - echo "Usage: $0 copy VM_NAME" - exit 1 - fi - - log "💾 Copying template to VM: $2" - check_environment - run_template_manager copy "$2" - ;; - - deploy) - if [[ -z "${2:-}" ]]; then - log_error "VM name is required for deploy operation" - echo "Usage: $0 deploy VM_NAME" - exit 1 - fi - - log "🚀 Deploying complete VM from template: $2" - check_environment - run_main_template "$2" deploy - ;; - - status) - log "📊 Checking template VM status..." - check_environment - - # Check template VM status directly - ssh "$UNRAID_USER@$UNRAID_HOST" "virsh domstate thrillwiki-template-ubuntu" 2>/dev/null || { - log_error "Could not check template VM status" - exit 1 - } - ;; - - update) - log "🔄 Template VM update instructions:" - echo "" - echo "To update your template VM:" - echo "1. Start the template VM on Unraid" - echo "2. SSH into the template VM" - echo "3. Update packages: sudo apt update && sudo apt upgrade -y" - echo "4. Update ThrillWiki dependencies if needed" - echo "5. Clean up temporary files: sudo apt autoremove && sudo apt autoclean" - echo "6. Clear bash history: history -c && history -w" - echo "7. Shutdown the template VM: sudo shutdown now" - echo "8. The updated disk is now ready as a template" - echo "" - log_warning "IMPORTANT: Template VM must be stopped before creating new instances" - - check_environment - run_template_manager update - ;; - - autopull) - shift # Remove 'autopull' from arguments - autopull_command="${1:-status}" - vm_name="${2:-$VM_NAME}" - - log "🔄 Managing auto-pull functionality..." - check_environment - - # Get list of all template VMs - if [[ "$autopull_command" == "status" ]] && [[ "$vm_name" == "$VM_NAME" ]]; then - all_vms=$(run_template_manager list | grep -E "(running|shut off)" | awk '{print $2}' || echo "") - else - all_vms=$vm_name - fi - - if [[ -z "$all_vms" ]]; then - log_warning "No running template VMs found to manage auto-pull on." - exit 0 - fi - - for vm in $all_vms; do - log "====== Auto-pull for VM: $vm ======" - - case "$autopull_command" in - status) - ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --status" - ;; - enable) - ssh "$vm" "(crontab -l 2>/dev/null || echo \"\") | { cat; echo \"*/10 * * * * [AWS-SECRET-REMOVED]uto-pull.sh >> /home/thrillwiki/logs/cron.log 2>&1\"; } | crontab - && echo '✅ Auto-pull enabled' || echo '❌ Failed to enable'" - ;; - disable) - ssh "$vm" "crontab -l 2>/dev/null | grep -v 'auto-pull.sh' | crontab - && echo '✅ Auto-pull disabled' || echo '❌ Failed to disable'" - ;; - logs) - ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --logs" - ;; - test) - ssh "$vm" "[AWS-SECRET-REMOVED]uto-pull.sh --force" - ;; - *) - log_error "Invalid auto-pull command: $autopull_command" - show_help - exit 1 - ;; - esac - echo - done - ;; - - --help|-h|help|"") - show_help - ;; - - *) - log_error "Unknown command: ${1:-}" - echo "" - show_help - ;; -esac diff --git a/shared/scripts/unraid/template_manager.py b/shared/scripts/unraid/template_manager.py deleted file mode 100644 index f0641367..00000000 --- a/shared/scripts/unraid/template_manager.py +++ /dev/null @@ -1,571 +0,0 @@ -#!/usr/bin/env python3 -""" -Template VM Manager for ThrillWiki -Handles copying template VM disks and managing template-based deployments. -""" - -import os -import sys -import time -import logging -import subprocess -from typing import Dict - -logger = logging.getLogger(__name__) - - -class TemplateVMManager: - """Manages template-based VM deployment on Unraid.""" - - def __init__(self, unraid_host: str, unraid_user: str = "root"): - self.unraid_host = unraid_host - self.unraid_user = unraid_user - self.template_vm_name = "thrillwiki-template-ubuntu" - self.template_path = f"/mnt/user/domains/{self.template_vm_name}" - - def authenticate(self) -> bool: - """Test SSH connectivity to Unraid server.""" - try: - result = subprocess.run( - f"ssh -o ConnectTimeout=10 {self.unraid_user}@{self.unraid_host} 'echo Connected'", - shell=True, - capture_output=True, - text=True, - timeout=15, - ) - - if result.returncode == 0 and "Connected" in result.stdout: - logger.info("Successfully connected to Unraid via SSH") - return True - else: - logger.error(f"SSH connection failed: {result.stderr}") - return False - except Exception as e: - logger.error(f"SSH authentication error: {e}") - return False - - def check_template_exists(self) -> bool: - """Check if template VM disk exists.""" - try: - result = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {self.template_path}/vdisk1.qcow2'", - shell=True, - capture_output=True, - text=True, - ) - if result.returncode == 0: - logger.info( - f"Template VM disk found at { - self.template_path}/vdisk1.qcow2" - ) - return True - else: - logger.error( - f"Template VM disk not found at { - self.template_path}/vdisk1.qcow2" - ) - return False - except Exception as e: - logger.error(f"Error checking template existence: {e}") - return False - - def get_template_info(self) -> Dict[str, str]: - """Get information about the template VM.""" - try: - # Get disk size - size_result = subprocess.run( - f"ssh { - self.unraid_user}@{ - self.unraid_host} 'qemu-img info { - self.template_path}/vdisk1.qcow2 | grep \"virtual size\"'", - shell=True, - capture_output=True, - text=True, - ) - - # Get file size - file_size_result = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'ls -lh {self.template_path}/vdisk1.qcow2'", - shell=True, - capture_output=True, - text=True, - ) - - # Get last modification time - mod_time_result = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'stat -c \"%y\" {self.template_path}/vdisk1.qcow2'", - shell=True, - capture_output=True, - text=True, - ) - - info = { - "template_path": f"{ - self.template_path}/vdisk1.qcow2", - "virtual_size": ( - size_result.stdout.strip() - if size_result.returncode == 0 - else "Unknown" - ), - "file_size": ( - file_size_result.stdout.split()[4] - if file_size_result.returncode == 0 - else "Unknown" - ), - "last_modified": ( - mod_time_result.stdout.strip() - if mod_time_result.returncode == 0 - else "Unknown" - ), - } - - return info - - except Exception as e: - logger.error(f"Error getting template info: {e}") - return {} - - def copy_template_disk(self, target_vm_name: str) -> bool: - """Copy template VM disk to a new VM instance.""" - try: - if not self.check_template_exists(): - logger.error("Template VM disk not found. Cannot proceed with copy.") - return False - - target_path = f"/mnt/user/domains/{target_vm_name}" - target_disk = f"{target_path}/vdisk1.qcow2" - - logger.info(f"Copying template disk to new VM: {target_vm_name}") - - # Create target directory - subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'mkdir -p {target_path}'", - shell=True, - check=True, - ) - - # Check if target disk already exists - disk_check = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'test -f {target_disk}'", - shell=True, - capture_output=True, - ) - - if disk_check.returncode == 0: - logger.warning(f"Target disk already exists: {target_disk}") - logger.info( - "Removing existing disk to replace with fresh template copy..." - ) - subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'rm -f {target_disk}'", - shell=True, - check=True, - ) - - # Copy template disk with rsync progress display - logger.info("🚀 Copying template disk with rsync progress display...") - start_time = time.time() - - # First, get the size of the template disk for progress calculation - size_result = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'stat -c%s {self.template_path}/vdisk1.qcow2'", - shell=True, - capture_output=True, - text=True, - ) - - template_size = "unknown size" - if size_result.returncode == 0: - size_bytes = int(size_result.stdout.strip()) - if size_bytes > 1024 * 1024 * 1024: # GB - template_size = f"{size_bytes / - (1024 * - 1024 * - 1024):.1f}GB" - elif size_bytes > 1024 * 1024: # MB - template_size = f"{size_bytes / (1024 * 1024):.1f}MB" - else: - template_size = f"{size_bytes / 1024:.1f}KB" - - logger.info(f"📊 Template disk size: {template_size}") - - # Use rsync with progress display - logger.info("📈 Using rsync for real-time progress display...") - - # Force rsync to output progress to stderr and capture it - copy_cmd = f"ssh { - self.unraid_user}@{ - self.unraid_host} 'rsync -av --progress --stats { - self.template_path}/vdisk1.qcow2 {target_disk}'" - - # Run with real-time output, unbuffered - process = subprocess.Popen( - copy_cmd, - shell=True, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - text=True, - bufsize=0, # Unbuffered - universal_newlines=True, - ) - - import select - - # Read both stdout and stderr for progress with real-time display - while True: - # Check if process is still running - if process.poll() is not None: - # Process finished, read any remaining output - remaining_out = process.stdout.read() - remaining_err = process.stderr.read() - if remaining_out: - print(f"📊 {remaining_out.strip()}", flush=True) - logger.info(f"📊 {remaining_out.strip()}") - if remaining_err: - for line in remaining_err.strip().split("\n"): - if line.strip(): - print(f"⚡ {line.strip()}", flush=True) - logger.info(f"⚡ {line.strip()}") - break - - # Use select to check for available data - try: - ready, _, _ = select.select( - [process.stdout, process.stderr], [], [], 0.1 - ) - - for stream in ready: - line = stream.readline() - if line: - line = line.strip() - if line: - if stream == process.stdout: - print(f"📊 {line}", flush=True) - logger.info(f"📊 {line}") - else: # stderr - # rsync progress goes to stderr - if any( - keyword in line - for keyword in [ - "%", - "bytes/sec", - "to-check=", - "xfr#", - ] - ): - print(f"⚡ {line}", flush=True) - logger.info(f"⚡ {line}") - else: - print(f"📋 {line}", flush=True) - logger.info(f"📋 {line}") - except select.error: - # Fallback for systems without select (like some Windows - # environments) - print( - "⚠️ select() not available, using fallback method...", - flush=True, - ) - logger.info("⚠️ select() not available, using fallback method...") - - # Simple fallback - just wait and read what's available - time.sleep(0.5) - try: - # Try to read non-blocking - import fcntl - import os - - # Make stdout/stderr non-blocking - fd_out = process.stdout.fileno() - fd_err = process.stderr.fileno() - fl_out = fcntl.fcntl(fd_out, fcntl.F_GETFL) - fl_err = fcntl.fcntl(fd_err, fcntl.F_GETFL) - fcntl.fcntl(fd_out, fcntl.F_SETFL, fl_out | os.O_NONBLOCK) - fcntl.fcntl(fd_err, fcntl.F_SETFL, fl_err | os.O_NONBLOCK) - - try: - out_line = process.stdout.readline() - if out_line: - print(f"📊 {out_line.strip()}", flush=True) - logger.info(f"📊 {out_line.strip()}") - except BaseException: - pass - - try: - err_line = process.stderr.readline() - if err_line: - if any( - keyword in err_line - for keyword in [ - "%", - "bytes/sec", - "to-check=", - "xfr#", - ] - ): - print(f"⚡ {err_line.strip()}", flush=True) - logger.info(f"⚡ {err_line.strip()}") - else: - print(f"📋 {err_line.strip()}", flush=True) - logger.info(f"📋 {err_line.strip()}") - except BaseException: - pass - except ImportError: - # If fcntl not available, just continue - print( - "📊 Progress display limited - continuing copy...", - flush=True, - ) - logger.info("📊 Progress display limited - continuing copy...") - break - - copy_result_code = process.wait() - - end_time = time.time() - copy_time = end_time - start_time - - if copy_result_code == 0: - logger.info( - f"✅ Template disk copied successfully in { - copy_time:.1f} seconds" - ) - logger.info(f"🎯 New VM disk created: {target_disk}") - - # Verify the copy by checking file size - verify_result = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'ls -lh {target_disk}'", - shell=True, - capture_output=True, - text=True, - ) - - if verify_result.returncode == 0: - file_info = verify_result.stdout.strip().split() - if len(file_info) >= 5: - copied_size = file_info[4] - logger.info(f"📋 Copied disk size: {copied_size}") - - return True - else: - logger.error( - f"❌ Failed to copy template disk (exit code: {copy_result_code})" - ) - logger.error("Check Unraid server disk space and permissions") - return False - - except Exception as e: - logger.error(f"Error copying template disk: {e}") - return False - - def prepare_vm_from_template( - self, target_vm_name: str, vm_memory: int, vm_vcpus: int, vm_ip: str - ) -> bool: - """Complete template-based VM preparation.""" - try: - logger.info(f"Preparing VM '{target_vm_name}' from template...") - - # Step 1: Copy template disk - if not self.copy_template_disk(target_vm_name): - return False - - logger.info(f"VM '{target_vm_name}' prepared successfully from template") - logger.info("The VM disk is ready with Ubuntu pre-installed") - logger.info("You can now create the VM configuration and start it") - - return True - - except Exception as e: - logger.error(f"Error preparing VM from template: {e}") - return False - - def update_template(self) -> bool: - """Update the template VM with latest changes.""" - try: - logger.info("Updating template VM...") - logger.info("Note: This should be done manually by:") - logger.info("1. Starting the template VM") - logger.info("2. Updating Ubuntu packages") - logger.info("3. Updating ThrillWiki dependencies") - logger.info("4. Stopping the template VM") - logger.info("5. The disk will automatically be the new template") - - # Check template VM status - template_status = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {self.template_vm_name}'", - shell=True, - capture_output=True, - text=True, - ) - - if template_status.returncode == 0: - status = template_status.stdout.strip() - logger.info( - f"Template VM '{ - self.template_vm_name}' status: {status}" - ) - - if status == "running": - logger.warning("Template VM is currently running!") - logger.warning("Stop the template VM when updates are complete") - logger.warning("Running VMs should not be used as templates") - return False - elif status in ["shut off", "shutoff"]: - logger.info( - "Template VM is properly stopped and ready to use as template" - ) - return True - else: - logger.warning(f"Template VM in unexpected state: {status}") - return False - else: - logger.error("Could not check template VM status") - return False - - except Exception as e: - logger.error(f"Error updating template: {e}") - return False - - def list_template_instances(self) -> list: - """List all VMs that were created from the template.""" - try: - # Get all domains - result = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'virsh list --all --name'", - shell=True, - capture_output=True, - text=True, - ) - - if result.returncode != 0: - logger.error("Failed to list VMs") - return [] - - all_vms = result.stdout.strip().split("\n") - - # Filter for thrillwiki VMs (excluding template) - template_instances = [] - for vm in all_vms: - vm = vm.strip() - if vm and "thrillwiki" in vm.lower() and vm != self.template_vm_name: - # Get VM status - status_result = subprocess.run( - f"ssh {self.unraid_user}@{self.unraid_host} 'virsh domstate {vm}'", - shell=True, - capture_output=True, - text=True, - ) - status = ( - status_result.stdout.strip() - if status_result.returncode == 0 - else "unknown" - ) - template_instances.append({"name": vm, "status": status}) - - return template_instances - - except Exception as e: - logger.error(f"Error listing template instances: {e}") - return [] - - -def main(): - """Main entry point for template manager.""" - import argparse - - parser = argparse.ArgumentParser( - description="ThrillWiki Template VM Manager", - epilog=""" -Examples: - python template_manager.py info # Show template info - python template_manager.py copy my-vm # Copy template to new VM - python template_manager.py list # List template instances - python template_manager.py update # Update template VM - """, - formatter_class=argparse.RawDescriptionHelpFormatter, - ) - - parser.add_argument( - "action", - choices=["info", "copy", "list", "update", "check"], - help="Action to perform", - ) - - parser.add_argument("vm_name", nargs="?", help="VM name (required for copy action)") - - args = parser.parse_args() - - # Get Unraid connection details from environment - unraid_host = os.environ.get("UNRAID_HOST") - unraid_user = os.environ.get("UNRAID_USER", "root") - - if not unraid_host: - logger.error("UNRAID_HOST environment variable is required") - sys.exit(1) - - # Create template manager - template_manager = TemplateVMManager(unraid_host, unraid_user) - - # Authenticate - if not template_manager.authenticate(): - logger.error("Failed to connect to Unraid server") - sys.exit(1) - - if args.action == "info": - logger.info("📋 Template VM Information") - info = template_manager.get_template_info() - if info: - print(f"Template Path: {info['template_path']}") - print(f"Virtual Size: {info['virtual_size']}") - print(f"File Size: {info['file_size']}") - print(f"Last Modified: {info['last_modified']}") - else: - print("❌ Failed to get template information") - sys.exit(1) - - elif args.action == "check": - if template_manager.check_template_exists(): - logger.info("✅ Template VM disk exists and is ready to use") - sys.exit(0) - else: - logger.error("❌ Template VM disk not found") - sys.exit(1) - - elif args.action == "copy": - if not args.vm_name: - logger.error("VM name is required for copy action") - sys.exit(1) - - success = template_manager.copy_template_disk(args.vm_name) - sys.exit(0 if success else 1) - - elif args.action == "list": - logger.info("📋 Template-based VM Instances") - instances = template_manager.list_template_instances() - if instances: - for instance in instances: - status_emoji = ( - "🟢" - if instance["status"] == "running" - else "🔴" if instance["status"] == "shut off" else "🟡" - ) - print( - f"{status_emoji} { - instance['name']} ({ - instance['status']})" - ) - else: - print("No template instances found") - - elif args.action == "update": - success = template_manager.update_template() - sys.exit(0 if success else 1) - - -if __name__ == "__main__": - # Setup logging - logging.basicConfig( - level=logging.INFO, - format="%(asctime)s - %(levelname)s - %(message)s", - handlers=[logging.StreamHandler()], - ) - - main() diff --git a/shared/scripts/unraid/thrillwiki-vm-template-simple.xml b/shared/scripts/unraid/thrillwiki-vm-template-simple.xml deleted file mode 100644 index 89be074c..00000000 --- a/shared/scripts/unraid/thrillwiki-vm-template-simple.xml +++ /dev/null @@ -1,116 +0,0 @@ - - - {VM_NAME} - {VM_UUID} - - - - {VM_MEMORY_KIB} - {VM_MEMORY_KIB} - {VM_VCPUS} - - hvm - /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd - /etc/libvirt/qemu/nvram/{VM_UUID}_VARS-pure-efi.fd - - - - - - - - - - - - - - - - - - destroy - restart - restart - - - - - - /usr/local/sbin/qemu - - - - - -
- - -
- - - - - -
- - - - -
- - - - -
- - - - -
- - - - -
- - -
- - - - - -
- - - - - - - - - - - -
- - -
- - - - - - -