mirror of
https://github.com/pacnpal/thrillwiki_django_no_react.git
synced 2025-12-20 04:31:09 -05:00
feat: Implement Entity Suggestion Manager and Modal components
- Added EntitySuggestionManager.vue to manage entity suggestions and authentication. - Created EntitySuggestionModal.vue for displaying suggestions and adding new entities. - Integrated AuthManager for user authentication within the suggestion modal. - Enhanced signal handling in start-servers.sh for graceful shutdown of servers. - Improved server startup script to ensure proper cleanup and responsiveness to termination signals. - Added documentation for signal handling fixes and usage instructions.
This commit is contained in:
111
.clinerules
111
.clinerules
@@ -1,83 +1,44 @@
|
||||
# Project Startup Rules
|
||||
# Project Startup & Development Rules
|
||||
|
||||
## Development Server
|
||||
IMPORTANT: Always follow these instructions exactly when starting the development server:
|
||||
|
||||
FIRST, assume the server is running. Always. Assume the changes have taken effect.
|
||||
|
||||
IF THERE IS AN ISSUE WITH THE SERVER, run the following command exactly:
|
||||
## Server & Package Management
|
||||
- **Starting the Dev Server:** Always assume the server is running and changes have taken effect. If issues arise, run:
|
||||
```bash
|
||||
lsof -ti :8000 | xargs kill -9; find . -type d -name "__pycache__" -exec rm -r {} +; cd backend && uv run manage.py runserver_plus && cd ../frontend && pnpm run dev
|
||||
|
||||
Note: These steps must be executed in this exact order to ensure consistent behavior. If server does not start correctly, fix the error in accordance with the error details as best you can.
|
||||
|
||||
## Package Management
|
||||
IMPORTANT: When a Python package is needed, only use UV to add it:
|
||||
```bash
|
||||
uv add <package>
|
||||
$PROJECT_ROOT/shared/scripts/start-servers.sh
|
||||
```
|
||||
Do not attempt to install packages using any other method.
|
||||
|
||||
## Django Management Commands
|
||||
IMPORTANT: When running any Django manage.py commands (migrations, shell, etc.), always use UV:
|
||||
- **Python Packages:** Only use UV to add packages:
|
||||
```bash
|
||||
uv run manage.py <command>
|
||||
cd $PROJECT_ROOT/backend && uv add <package>
|
||||
```
|
||||
This applies to all management commands including but not limited to:
|
||||
- Making migrations: `uv run manage.py makemigrations`
|
||||
- Applying migrations: `uv run manage.py migrate`
|
||||
- Creating superuser: `uv run manage.py createsuperuser` and possible echo commands before for the necessary data input.
|
||||
- Starting shell: `uv run manage.py shell` and possible echo commands before for the necessary data input.
|
||||
- **Django Commands:** Always use `uv run manage.py <command>` for all management tasks (migrations, shell, superuser, etc.). Never use `python manage.py` or `uv run python manage.py`.
|
||||
|
||||
NEVER use `python manage.py` or `uv run python manage.py`. Always use `uv run manage.py` directly.
|
||||
## Frontend API URL Rules
|
||||
- **Vite Proxy:** Always check `frontend/vite.config.ts` for proxy rules before changing frontend API URLs.
|
||||
- **URL Flow:** Understand how frontend URLs are rewritten by Vite proxy (e.g., `/api/auth/login/` → `/api/v1/auth/login/`).
|
||||
- **Verification:** Confirm proxy behavior via config and browser network tab. Only change URLs if proxy is NOT handling rewriting.
|
||||
- **Common Mistake:** Don’t assume frontend URLs are wrong due to proxy configuration.
|
||||
|
||||
## Entity Relationship Rules
|
||||
IMPORTANT: Follow these entity relationship patterns consistently:
|
||||
## Entity Relationship Patterns
|
||||
- **Park:** Must have Operator (required), may have PropertyOwner (optional), cannot reference Company directly.
|
||||
- **Ride:** Must belong to Park, may have Manufacturer/Designer (optional), cannot reference Company directly.
|
||||
- **Entities:**
|
||||
- Operators: Operate parks.
|
||||
- PropertyOwners: Own park property (optional).
|
||||
- Manufacturers: Make rides.
|
||||
- Designers: Design rides.
|
||||
- All entities can have locations.
|
||||
- **Constraints:** Operator and PropertyOwner can be same or different. Manufacturers and Designers are distinct. Use proper foreign keys with correct null/blank settings.
|
||||
|
||||
# Park Relationships
|
||||
- Parks MUST have an Operator (required relationship)
|
||||
- Parks MAY have a PropertyOwner (optional, usually same as Operator)
|
||||
- Parks CANNOT directly reference Company entities
|
||||
|
||||
# Ride Relationships
|
||||
- Rides MUST belong to a Park (required relationship)
|
||||
- Rides MAY have a Manufacturer (optional relationship)
|
||||
- Rides MAY have a Designer (optional relationship)
|
||||
- Rides CANNOT directly reference Company entities
|
||||
|
||||
# Entity Definitions
|
||||
- Operators: Companies that operate theme parks (replaces Company.owner)
|
||||
- PropertyOwners: Companies that own park property (new concept, optional)
|
||||
- Manufacturers: Companies that manufacture rides (replaces Company for rides)
|
||||
- Designers: Companies/individuals that design rides (existing concept)
|
||||
- IMPORTANT: All entities can have locations.
|
||||
|
||||
# Relationship Constraints
|
||||
- Operator and PropertyOwner are usually the same entity but CAN be different
|
||||
- Manufacturers and Designers are distinct concepts and should not be conflated
|
||||
- All entity relationships should use proper foreign keys with appropriate null/blank settings
|
||||
|
||||
- You are to NEVER assume that blank output means your fixes were correct. That assumption can lead to further issues down the line.
|
||||
- ALWAYS verify your changes by testing the affected functionality thoroughly.
|
||||
- ALWAYS use context7 to check documentation when troubleshooting. It contains VITAL documentation for any and all frameworks, modules, and packages.
|
||||
- ALWAYS document your code changes with conport and the reasoning behind them.
|
||||
- ALWAYS include relevant context and information when making changes to the codebase.
|
||||
- ALWAYS ensure that your code changes are properly tested and validated before deployment.
|
||||
- ALWAYS communicate clearly and effectively with your team about any changes you make.
|
||||
- ALWAYS be open to feedback and willing to make adjustments as necessary.
|
||||
- ALWAYS strive for continuous improvement in your work and processes.
|
||||
- ALWAYS prioritize code readability and maintainability.
|
||||
- ALWAYS keep security best practices in mind when developing and reviewing code.
|
||||
- ALWAYS consider performance implications when making changes to the codebase.
|
||||
- ALWAYS be mindful of the impact of your changes on the overall system architecture.
|
||||
- ALWAYS keep scalability in mind when designing new features or modifying existing ones.
|
||||
- ALWAYS consider the potential for code reuse and modularity in your designs.
|
||||
- ALWAYS document your code with clear and concise comments.
|
||||
- ALWAYS keep your code DRY (Don't Repeat Yourself) by abstracting common functionality into reusable components.
|
||||
- ALWAYS use meaningful variable and function names to improve code readability.
|
||||
- ALWAYS handle errors and exceptions gracefully to improve the user experience.
|
||||
- ALWAYS log important events and errors for troubleshooting purposes.
|
||||
- ALWAYS consider if there may be an existing module or package that can be leveraged before creating new functionality from scratch.
|
||||
- ALWAYS keep documentation up to date with any code changes.
|
||||
- ALWAYS consider if there are any potential security vulnerabilities in your code.
|
||||
- ALWAYS consider if there are any potential performance bottlenecks in your code.
|
||||
## General Best Practices
|
||||
- Never assume blank output means success—always verify changes by testing.
|
||||
- Use context7 for documentation when troubleshooting.
|
||||
- Document changes with conport and reasoning.
|
||||
- Include relevant context and information in all changes.
|
||||
- Test and validate code before deployment.
|
||||
- Communicate changes clearly with your team.
|
||||
- Be open to feedback and continuous improvement.
|
||||
- Prioritize readability, maintainability, security, performance, scalability, and modularity.
|
||||
- Use meaningful names, DRY principles, clear comments, and handle errors gracefully.
|
||||
- Log important events/errors for troubleshooting.
|
||||
- Prefer existing modules/packages over new code.
|
||||
- Keep documentation up to date.
|
||||
- Consider security vulnerabilities and performance bottlenecks in all changes.
|
||||
|
||||
390
.roo/rules/roo_code_conport_strategy
Normal file
390
.roo/rules/roo_code_conport_strategy
Normal file
@@ -0,0 +1,390 @@
|
||||
# --- ConPort Memory Strategy ---
|
||||
conport_memory_strategy:
|
||||
# CRITICAL: At the beginning of every session, the agent MUST execute the 'initialization' sequence
|
||||
# to determine the ConPort status and load relevant context.
|
||||
workspace_id_source: "The agent must obtain the absolute path to the current workspace to use as `workspace_id` for all ConPort tool calls. This might be available as `${workspaceFolder}` or require asking the user."
|
||||
|
||||
initialization:
|
||||
thinking_preamble: |
|
||||
|
||||
agent_action_plan:
|
||||
- step: 1
|
||||
action: "Determine `ACTUAL_WORKSPACE_ID`."
|
||||
- step: 2
|
||||
action: "Invoke `list_files` for `ACTUAL_WORKSPACE_ID + \"/context_portal/\"`."
|
||||
tool_to_use: "list_files"
|
||||
parameters: "path: ACTUAL_WORKSPACE_ID + \"/context_portal/\""
|
||||
- step: 3
|
||||
action: "Analyze result and branch based on 'context.db' existence."
|
||||
conditions:
|
||||
- if: "'context.db' is found"
|
||||
then_sequence: "load_existing_conport_context"
|
||||
- else: "'context.db' NOT found"
|
||||
then_sequence: "handle_new_conport_setup"
|
||||
|
||||
load_existing_conport_context:
|
||||
thinking_preamble: |
|
||||
|
||||
agent_action_plan:
|
||||
- step: 1
|
||||
description: "Attempt to load initial contexts from ConPort."
|
||||
actions:
|
||||
- "Invoke `get_product_context`... Store result."
|
||||
- "Invoke `get_active_context`... Store result."
|
||||
- "Invoke `get_decisions` (limit 5 for a better overview)... Store result."
|
||||
- "Invoke `get_progress` (limit 5)... Store result."
|
||||
- "Invoke `get_system_patterns` (limit 5)... Store result."
|
||||
- "Invoke `get_custom_data` (category: \"critical_settings\")... Store result."
|
||||
- "Invoke `get_custom_data` (category: \"ProjectGlossary\")... Store result."
|
||||
- "Invoke `get_recent_activity_summary` (default params, e.g., last 24h, limit 3 per type) for a quick catch-up. Store result."
|
||||
- step: 2
|
||||
description: "Analyze loaded context."
|
||||
conditions:
|
||||
- if: "results from step 1 are NOT empty/minimal"
|
||||
actions:
|
||||
- "Set internal status to [CONPORT_ACTIVE]."
|
||||
- "Inform user: \"ConPort memory initialized. Existing contexts and recent activity loaded.\""
|
||||
- "Use `ask_followup_question` with suggestions like \"Review recent activity?\", \"Continue previous task?\", \"What would you like to work on?\"."
|
||||
- else: "loaded context is empty/minimal despite DB file existing"
|
||||
actions:
|
||||
- "Set internal status to [CONPORT_ACTIVE]."
|
||||
- "Inform user: \"ConPort database file found, but it appears to be empty or minimally initialized. You can start by defining Product/Active Context or logging project information.\""
|
||||
- "Use `ask_followup_question` with suggestions like \"Define Product Context?\", \"Log a new decision?\"."
|
||||
- step: 3
|
||||
description: "Handle Load Failure (if step 1's `get_*` calls failed)."
|
||||
condition: "If any `get_*` calls in step 1 failed unexpectedly"
|
||||
action: "Fall back to `if_conport_unavailable_or_init_failed`."
|
||||
|
||||
handle_new_conport_setup:
|
||||
thinking_preamble: |
|
||||
|
||||
agent_action_plan:
|
||||
- step: 1
|
||||
action: "Inform user: \"No existing ConPort database found at `ACTUAL_WORKSPACE_ID + \"/context_portal/context.db\"`.\""
|
||||
- step: 2
|
||||
action: "Use `ask_followup_question`."
|
||||
tool_to_use: "ask_followup_question"
|
||||
parameters:
|
||||
question: "Would you like to initialize a new ConPort database for this workspace? The database will be created automatically when ConPort tools are first used."
|
||||
suggestions:
|
||||
- "Yes, initialize a new ConPort database."
|
||||
- "No, do not use ConPort for this session."
|
||||
- step: 3
|
||||
description: "Process user response."
|
||||
conditions:
|
||||
- if_user_response_is: "Yes, initialize a new ConPort database."
|
||||
actions:
|
||||
- "Inform user: \"Okay, a new ConPort database will be created.\""
|
||||
- description: "Attempt to bootstrap Product Context from projectBrief.md (this happens only on new setup)."
|
||||
thinking_preamble: |
|
||||
|
||||
sub_steps:
|
||||
- "Invoke `list_files` with `path: ACTUAL_WORKSPACE_ID` (non-recursive, just to check root)."
|
||||
- description: "Analyze `list_files` result for 'projectBrief.md'."
|
||||
conditions:
|
||||
- if: "'projectBrief.md' is found in the listing"
|
||||
actions:
|
||||
- "Invoke `read_file` for `ACTUAL_WORKSPACE_ID + \"/projectBrief.md\"`."
|
||||
- action: "Use `ask_followup_question`."
|
||||
tool_to_use: "ask_followup_question"
|
||||
parameters:
|
||||
question: "Found projectBrief.md in your workspace. As we're setting up ConPort for the first time, would you like to import its content into the Product Context?"
|
||||
suggestions:
|
||||
- "Yes, import its content now."
|
||||
- "No, skip importing it for now."
|
||||
- description: "Process user response to import projectBrief.md."
|
||||
conditions:
|
||||
- if_user_response_is: "Yes, import its content now."
|
||||
actions:
|
||||
- "(No need to `get_product_context` as DB is new and empty)"
|
||||
- "Prepare `content` for `update_product_context`. For example: `{\"initial_product_brief\": \"[content from projectBrief.md]\"}`."
|
||||
- "Invoke `update_product_context` with the prepared content."
|
||||
- "Inform user of the import result (success or failure)."
|
||||
- else: "'projectBrief.md' NOT found"
|
||||
actions:
|
||||
- action: "Use `ask_followup_question`."
|
||||
tool_to_use: "ask_followup_question"
|
||||
parameters:
|
||||
question: "`projectBrief.md` was not found in the workspace root. Would you like to define the initial Product Context manually now?"
|
||||
suggestions:
|
||||
- "Define Product Context manually."
|
||||
- "Skip for now."
|
||||
- "(If \"Define manually\", guide user through `update_product_context`)."
|
||||
- "Proceed to 'load_existing_conport_context' sequence (which will now load the potentially bootstrapped product context and other empty contexts)."
|
||||
- if_user_response_is: "No, do not use ConPort for this session."
|
||||
action: "Proceed to `if_conport_unavailable_or_init_failed` (with a message indicating user chose not to initialize)."
|
||||
|
||||
if_conport_unavailable_or_init_failed:
|
||||
thinking_preamble: |
|
||||
|
||||
agent_action: "Inform user: \"ConPort memory will not be used for this session. Status: [CONPORT_INACTIVE].\""
|
||||
|
||||
general:
|
||||
status_prefix: "Begin EVERY response with either '[CONPORT_ACTIVE]' or '[CONPORT_INACTIVE]'."
|
||||
proactive_logging_cue: "Remember to proactively identify opportunities to log or update ConPort based on the conversation (e.g., if user outlines a new plan, consider logging decisions or progress). Confirm with the user before logging."
|
||||
proactive_error_handling: "When encountering errors (e.g., tool failures, unexpected output), proactively log the error details using `log_custom_data` (category: 'ErrorLogs', key: 'timestamp_error_summary') and consider updating `active_context` with `open_issues` if it's a persistent problem. Prioritize using ConPort's `get_item_history` or `get_recent_activity_summary` to diagnose issues if they relate to past context changes."
|
||||
semantic_search_emphasis: "For complex or nuanced queries, especially when direct keyword search (`search_decisions_fts`, `search_custom_data_value_fts`) might be insufficient, prioritize using `semantic_search_conport` to leverage conceptual understanding and retrieve more relevant context. Explain to the user why semantic search is being used."
|
||||
|
||||
conport_updates:
|
||||
frequency: "UPDATE CONPORT THROUGHOUT THE CHAT SESSION, WHEN SIGNIFICANT CHANGES OCCUR, OR WHEN EXPLICITLY REQUESTED."
|
||||
workspace_id_note: "All ConPort tool calls require the `workspace_id`."
|
||||
tools:
|
||||
- name: get_product_context
|
||||
trigger: "To understand the overall project goals, features, or architecture at any time."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_product_context` (`{"workspace_id": "..."}`). Result is a direct dictionary.
|
||||
- name: update_product_context
|
||||
trigger: "When the high-level project description, goals, features, or overall architecture changes significantly, as confirmed by the user."
|
||||
action_description: |
|
||||
<thinking>
|
||||
- Product context needs updating.
|
||||
- Step 1: (Optional but recommended if unsure of current state) Invoke `get_product_context`.
|
||||
- Step 2: Prepare the `content` (for full overwrite) or `patch_content` (partial update) dictionary.
|
||||
- To remove a key using `patch_content`, set its value to the special string sentinel `\"__DELETE__\"`.
|
||||
- Confirm changes with the user.
|
||||
</thinking>
|
||||
# Agent Action: Invoke `update_product_context` (`{"workspace_id": "...", "content": {...}}` or `{"workspace_id": "...", "patch_content": {"key_to_update": "new_value", "key_to_delete": "__DELETE__"}}`).
|
||||
- name: get_active_context
|
||||
trigger: "To understand the current task focus, immediate goals, or session-specific context."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_active_context` (`{"workspace_id": "..."}`). Result is a direct dictionary.
|
||||
- name: update_active_context
|
||||
trigger: "When the current focus of work changes, new questions arise, or session-specific context needs updating (e.g., `current_focus`, `open_issues`), as confirmed by the user."
|
||||
action_description: |
|
||||
<thinking>
|
||||
- Active context needs updating.
|
||||
- Step 1: (Optional) Invoke `get_active_context` to retrieve the current state.
|
||||
- Step 2: Prepare `content` (for full overwrite) or `patch_content` (for partial update).
|
||||
- Common fields to update include `current_focus`, `open_issues`, and other session-specific data.
|
||||
- To remove a key using `patch_content`, set its value to the special string sentinel `\"__DELETE__\"`.
|
||||
- Confirm changes with the user.
|
||||
</thinking>
|
||||
# Agent Action: Invoke `update_active_context` (`{"workspace_id": "...", "content": {...}}` or `{"workspace_id": "...", "patch_content": {"current_focus": "new_focus", "open_issues": ["issue1", "issue2"], "key_to_delete": "__DELETE__"}}`).
|
||||
- name: log_decision
|
||||
trigger: "When a significant architectural or implementation decision is made and confirmed by the user."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `log_decision` (`{"workspace_id": "...", "summary": "...", "rationale": "...", "tags": ["optional_tag"]}}`).
|
||||
- name: get_decisions
|
||||
trigger: "To retrieve a list of past decisions, e.g., to review history or find a specific decision."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_decisions` (`{"workspace_id": "...", "limit": N, "tags_filter_include_all": ["tag1"], "tags_filter_include_any": ["tag2"]}}`). Explain optional filters.
|
||||
- name: search_decisions_fts
|
||||
trigger: "When searching for decisions by keywords in summary, rationale, details, or tags, and basic `get_decisions` is insufficient."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `search_decisions_fts` (`{"workspace_id": "...", "query_term": "search keywords", "limit": N}}`).
|
||||
- name: delete_decision_by_id
|
||||
trigger: "When user explicitly confirms deletion of a specific decision by its ID."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `delete_decision_by_id` (`{"workspace_id": "...", "decision_id": ID}}`). Emphasize prior confirmation.
|
||||
- name: log_progress
|
||||
trigger: "When a task begins, its status changes (e.g., TODO, IN_PROGRESS, DONE), or it's completed. Also when a new sub-task is defined."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `log_progress` (`{"workspace_id": "...", "description": "...", "status": "...", "linked_item_type": "...", "linked_item_id": "..."}}`). Note: 'summary' was changed to 'description' for log_progress.
|
||||
- name: get_progress
|
||||
trigger: "To review current task statuses, find pending tasks, or check history of progress."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_progress` (`{"workspace_id": "...", "status_filter": "...", "parent_id_filter": ID, "limit": N}}`).
|
||||
- name: update_progress
|
||||
trigger: "Updates an existing progress entry."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `update_progress` (`{"workspace_id": "...", "progress_id": ID, "status": "...", "description": "...", "parent_id": ID}}`).
|
||||
- name: delete_progress_by_id
|
||||
trigger: "Deletes a progress entry by its ID."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `delete_progress_by_id` (`{"workspace_id": "...", "progress_id": ID}}`).
|
||||
- name: log_system_pattern
|
||||
trigger: "When new architectural patterns are introduced, or existing ones are modified, as confirmed by the user."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `log_system_pattern` (`{"workspace_id": "...", "name": "...", "description": "...", "tags": ["optional_tag"]}}`).
|
||||
- name: get_system_patterns
|
||||
trigger: "To retrieve a list of defined system patterns."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_system_patterns` (`{"workspace_id": "...", "tags_filter_include_all": ["tag1"], "limit": N}}`). Note: limit was not in original example, added for consistency.
|
||||
- name: delete_system_pattern_by_id
|
||||
trigger: "When user explicitly confirms deletion of a specific system pattern by its ID."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `delete_system_pattern_by_id` (`{"workspace_id": "...", "pattern_id": ID}}`). Emphasize prior confirmation.
|
||||
- name: log_custom_data
|
||||
trigger: "To store any other type of structured or unstructured project-related information not covered by other tools (e.g., glossary terms, technical specs, meeting notes), as confirmed by the user."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `log_custom_data` (`{"workspace_id": "...", "category": "...", "key": "...", "value": {... or "string"}}`). Note: 'metadata' field is not part of log_custom_data args.
|
||||
- name: get_custom_data
|
||||
trigger: "To retrieve specific custom data by category and key."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_custom_data` (`{"workspace_id": "...", "category": "...", "key": "..."}}`).
|
||||
- name: delete_custom_data
|
||||
trigger: "When user explicitly confirms deletion of specific custom data by category and key."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `delete_custom_data` (`{"workspace_id": "...", "category": "...", "key": "..."}}`). Emphasize prior confirmation.
|
||||
- name: search_custom_data_value_fts
|
||||
trigger: "When searching for specific terms within any custom data values, categories, or keys."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `search_custom_data_value_fts` (`{"workspace_id": "...", "query_term": "...", "category_filter": "...", "limit": N}}`).
|
||||
- name: search_project_glossary_fts
|
||||
trigger: "When specifically searching for terms within the 'ProjectGlossary' custom data category."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `search_project_glossary_fts` (`{"workspace_id": "...", "query_term": "...", "limit": N}}`).
|
||||
- name: semantic_search_conport
|
||||
trigger: "When a natural language query requires conceptual understanding beyond keyword matching, or when direct keyword searches are insufficient."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `semantic_search_conport` (`{"workspace_id": "...", "query_text": "...", "top_k": N, "filter_item_types": ["decision", "custom_data"]}}`). Explain filters.
|
||||
- name: link_conport_items
|
||||
trigger: "When a meaningful relationship is identified and confirmed between two existing ConPort items (e.g., a decision is implemented by a system pattern, a progress item tracks a decision)."
|
||||
action_description: |
|
||||
<thinking>
|
||||
- Need to link two items. Identify source type/ID, target type/ID, and relationship.
|
||||
- Common relationship_types: 'implements', 'related_to', 'tracks', 'blocks', 'clarifies', 'depends_on'. Propose a suitable one or ask user.
|
||||
</thinking>
|
||||
# Agent Action: Invoke `link_conport_items` (`{"workspace_id":"...", "source_item_type":"...", "source_item_id":"...", "target_item_type":"...", "target_item_id":"...", "relationship_type":"...", "description":"Optional notes"}`).
|
||||
- name: get_linked_items
|
||||
trigger: "To understand the relationships of a specific ConPort item, or to explore the knowledge graph around an item."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_linked_items` (`{"workspace_id":"...", "item_type":"...", "item_id":"...", "relationship_type_filter":"...", "linked_item_type_filter":"...", "limit":N}`).
|
||||
- name: get_item_history
|
||||
trigger: "When needing to review past versions of Product Context or Active Context, or to see when specific changes were made."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_item_history` (`{"workspace_id":"...", "item_type":"product_context" or "active_context", "limit":N, "version":V, "before_timestamp":"ISO_DATETIME", "after_timestamp":"ISO_DATETIME"}`).
|
||||
- name: batch_log_items
|
||||
trigger: "When the user provides a list of multiple items of the SAME type (e.g., several decisions, multiple new glossary terms) to be logged at once."
|
||||
action_description: |
|
||||
<thinking>
|
||||
- User provided multiple items. Verify they are of the same loggable type.
|
||||
- Construct the `items` list, where each element is a dictionary of arguments for the single-item log tool (e.g., for `log_decision`).
|
||||
</thinking>
|
||||
# Agent Action: Invoke `batch_log_items` (`{"workspace_id":"...", "item_type":"decision", "items": [{"summary":"...", "rationale":"..."}, {"summary":"..."}] }`).
|
||||
- name: get_recent_activity_summary
|
||||
trigger: "At the start of a new session to catch up, or when the user asks for a summary of recent project activities."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_recent_activity_summary` (`{"workspace_id":"...", "hours_ago":H, "since_timestamp":"ISO_DATETIME", "limit_per_type":N}`). Explain default if no time args.
|
||||
- name: get_conport_schema
|
||||
trigger: "If there's uncertainty about available ConPort tools or their arguments during a session (internal LLM check), or if an advanced user specifically asks for the server's tool schema."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `get_conport_schema` (`{"workspace_id":"..."}`). Primarily for internal LLM reference or direct user request.
|
||||
- name: export_conport_to_markdown
|
||||
trigger: "When the user requests to export the current ConPort data to markdown files (e.g., for backup, sharing, or version control)."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `export_conport_to_markdown` (`{"workspace_id":"...", "output_path":"optional/relative/path"}`). Explain default output path if not provided.
|
||||
- name: import_markdown_to_conport
|
||||
trigger: "When the user requests to import ConPort data from a directory of markdown files previously exported by this system."
|
||||
action_description: |
|
||||
# Agent Action: Invoke `import_markdown_to_conport` (`{"workspace_id":"...", "input_path":"optional/relative/path"}`). Explain default input path. Warn about potential overwrites or merges if data already exists.
|
||||
- name: reconfigure_core_guidance
|
||||
type: guidance
|
||||
product_active_context: "The internal JSON structure of 'Product Context' and 'Active Context' (the `content` field) is flexible. Work with the user to define and evolve this structure via `update_product_context` and `update_active_context`. The server stores this `content` as a JSON blob."
|
||||
decisions_progress_patterns: "The fundamental fields for Decisions, Progress, and System Patterns are fixed by ConPort's tools. For significantly different structures or additional fields, guide the user to create a new custom context category using `log_custom_data` (e.g., category: 'project_milestones_detailed')."
|
||||
|
||||
conport_sync_routine:
|
||||
trigger: "^(Sync ConPort|ConPort Sync)$"
|
||||
user_acknowledgement_text: "[CONPORT_SYNCING]"
|
||||
instructions:
|
||||
- "Halt Current Task: Stop current activity."
|
||||
- "Acknowledge Command: Send `[CONPORT_SYNCING]` to the user."
|
||||
- "Review Chat History: Analyze the complete current chat session for new information, decisions, progress, context changes, clarifications, and potential new relationships between items."
|
||||
core_update_process:
|
||||
thinking_preamble: |
|
||||
- Synchronize ConPort with information from the current chat session.
|
||||
- Use appropriate ConPort tools based on identified changes.
|
||||
- For `update_product_context` and `update_active_context`, first fetch current content, then merge/update (potentially using `patch_content`), then call the update tool with the *complete new content object* or the patch.
|
||||
- All tool calls require the `workspace_id`.
|
||||
agent_action_plan_illustrative:
|
||||
- "Log new decisions (use `log_decision`)."
|
||||
- "Log task progress/status changes (use `log_progress`)."
|
||||
- "Update existing progress entries (use `update_progress`)."
|
||||
- "Delete progress entries (use `delete_progress_by_id`)."
|
||||
- "Log new system patterns (use `log_system_pattern`)."
|
||||
- "Update Active Context (use `get_active_context` then `update_active_context` with full or patch)."
|
||||
- "Update Product Context if significant changes (use `get_product_context` then `update_product_context` with full or patch)."
|
||||
- "Log new custom context, including ProjectGlossary terms (use `log_custom_data`)."
|
||||
- "Identify and log new relationships between items (use `link_conport_items`)."
|
||||
- "If many items of the same type were discussed, consider `batch_log_items`."
|
||||
- "After updates, consider a brief `get_recent_activity_summary` to confirm and refresh understanding."
|
||||
post_sync_actions:
|
||||
- "Inform user: ConPort synchronized with session info."
|
||||
- "Resume previous task or await new instructions."
|
||||
|
||||
dynamic_context_retrieval_for_rag:
  description: |
    Guidance for dynamically retrieving and assembling context from ConPort to answer user queries or perform tasks,
    enhancing Retrieval Augmented Generation (RAG) capabilities.
  trigger: "When the AI needs to answer a specific question, perform a task requiring detailed project knowledge, or generate content based on ConPort data."
  goal: "To construct a concise, highly relevant context set for the LLM, improving the accuracy and relevance of its responses."
  steps:
    - step: 1
      action: "Analyze User Query/Task"
      details: "Deconstruct the user's request to identify key entities, concepts, keywords, and the specific type of information needed from ConPort."
    - step: 2
      action: "Prioritized Retrieval Strategy"
      details: |
        Based on the analysis, select the most appropriate ConPort tools:
        - **Targeted FTS:** Use `search_decisions_fts`, `search_custom_data_value_fts`, `search_project_glossary_fts` for keyword-based searches if specific terms are evident.
        - **Specific Item Retrieval:** Use `get_custom_data` (if category/key known), `get_decisions` (by ID or for recent items), `get_system_patterns`, `get_progress` if the query points to specific item types or IDs.
        - **(Future):** Prioritize semantic search tools once available for conceptual queries.
        - **Broad Context (Fallback):** Use `get_product_context` or `get_active_context` as a fallback if targeted retrieval yields little, but be mindful of their size.
    - step: 3
      action: "Retrieve Initial Set"
      details: "Execute the chosen tool(s) to retrieve an initial, small set (e.g., top 3-5) of the most relevant items or data snippets."
    - step: 4
      action: "Contextual Expansion (Optional)"
      details: "For the most promising items from Step 3, consider using `get_linked_items` to fetch directly related items (1-hop). This can provide crucial context or disambiguation. Use judiciously to avoid excessive data."
    - step: 5
      action: "Synthesize and Filter"
      details: |
        Review the retrieved information (initial set + expanded context).
        - **Filter:** Discard irrelevant items or parts of items.
        - **Synthesize/Summarize:** If multiple relevant pieces of information are found, synthesize them into a concise summary that directly addresses the query/task. Extract only the most pertinent sentences or facts.
    - step: 6
      action: "Assemble Prompt Context"
      details: |
        Construct the context portion of the LLM prompt using the filtered and synthesized information.
        - **Clarity:** Clearly delineate this retrieved context from the user's query or other parts of the prompt.
        - **Attribution (Optional but Recommended):** If possible, briefly note the source of the information (e.g., "From Decision D-42:", "According to System Pattern SP-5:").
        - **Brevity:** Strive for relevance and conciseness. Avoid including large, unprocessed chunks of data unless absolutely necessary and directly requested.
  general_principles:
    - "Prefer targeted retrieval over broad context dumps."
    - "Iterate if initial retrieval is insufficient: try different keywords or tools."
    - "Balance context richness with prompt token limits."
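The retrieval loop in steps 2 through 5 can be sketched in plain Python. The `search_fts` and `get_linked_items` callables below are hypothetical stand-ins for the actual ConPort tool calls, and the stub data is invented for illustration only:

```python
# Sketch of the prioritized-retrieval flow (steps 2-5), assuming hypothetical
# ConPort tool wrappers; this is not the real client API.

def retrieve_context(query_keywords, search_fts, get_linked_items, top_k=5):
    """Return a small, relevance-ordered context set for a query."""
    # Steps 2-3: targeted FTS first, keeping only the top few hits.
    hits = search_fts(query_keywords)[:top_k]

    # Step 4: optional 1-hop expansion from the single best hit.
    expanded = get_linked_items(hits[0]["id"]) if hits else []

    # Step 5: filter duplicates and keep a concise, ordered set.
    seen, context = set(), []
    for item in hits + expanded:
        if item["id"] not in seen:
            seen.add(item["id"])
            context.append(item)
    return context


# Usage with stub tool functions standing in for real ConPort calls:
decisions = {1: {"id": 1, "text": "Use UV"}, 2: {"id": 2, "text": "Use pnpm"}}
links = {1: [{"id": 2, "text": "Use pnpm"}]}
result = retrieve_context(
    ["uv"],
    search_fts=lambda keywords: [decisions[1]],
    get_linked_items=lambda item_id: links.get(item_id, []),
)
print([item["id"] for item in result])  # → [1, 2]
```

Step 6 (prompt assembly) is intentionally left out: it is a formatting concern that depends on the host LLM prompt, not on retrieval.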
proactive_knowledge_graph_linking:
  description: |
    Guidance for the AI to proactively identify and suggest the creation of links between ConPort items,
    enriching the project's knowledge graph based on conversational context.
  trigger: "During ongoing conversation, when the AI observes potential relationships (e.g., causal, implementational, clarifying) between two or more discussed ConPort items or concepts that are likely represented as ConPort items."
  goal: "To actively build and maintain a rich, interconnected knowledge graph within ConPort by capturing relationships that might otherwise be missed."
  steps:
    - step: 1
      action: "Monitor Conversational Context"
      details: "Continuously analyze the user's statements and the flow of discussion for mentions of ConPort items (explicitly by ID, or implicitly by well-known names/summaries) and the relationships being described or implied between them."
    - step: 2
      action: "Identify Potential Links"
      details: |
        Look for patterns such as:
        - User states "Decision X led to us doing Y (which is Progress item P-3)."
        - User discusses how System Pattern SP-2 helps address a concern noted in Decision D-5.
        - User outlines a task (Progress P-10) that implements a specific feature detailed in a `custom_data` spec (CD-Spec-FeatureX).
    - step: 3
      action: "Formulate and Propose Link Suggestion"
      details: |
        If a potential link is identified:
        - Clearly state the items involved (e.g., "Decision D-5", "System Pattern SP-2").
        - Describe the perceived relationship (e.g., "It seems SP-2 addresses a concern in D-5.").
        - Propose creating a link using `ask_followup_question`.
        - Example Question: "I noticed we're discussing Decision D-5 and System Pattern SP-2. It sounds like SP-2 might 'address_concern_in' D-5. Would you like me to create this link in ConPort? You can also suggest a different relationship type."
        - Suggested Answers:
          - "Yes, link them with 'addresses_concern_in'."
          - "Yes, but use relationship type: [user types here]."
          - "No, don't link them now."
        - Offer common relationship types as examples if needed: 'implements', 'clarifies', 'related_to', 'depends_on', 'blocks', 'resolves', 'derived_from'.
    - step: 4
      action: "Gather Details and Execute Linking"
      details: |
        If the user confirms:
        - Ensure you have the correct source item type, source item ID, target item type, target item ID, and the agreed-upon relationship type.
        - Ask for an optional brief description for the link if the relationship isn't obvious.
        - Invoke the `link_conport_items` tool.
    - step: 5
      action: "Confirm Outcome"
      details: "Inform the user of the success or failure of the `link_conport_items` tool call."
  general_principles:
    - "Be helpful, not intrusive. If the user declines a suggestion, accept and move on."
    - "Prioritize clear, strong relationships over tenuous ones."
    - "This strategy complements the general `proactive_logging_cue` by providing specific guidance for link creation."
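The item-mention spotting in step 2 can be approximated with a simple ID scan. The regex below assumes the `D-`/`SP-`/`P-` naming style used in the examples above and is illustrative only, not part of the ConPort tooling:

```python
import re

# Illustrative detector for ConPort item mentions (step 2), assuming the
# "D-5" / "SP-2" / "P-10" ID style shown in the strategy examples.
ITEM_PATTERN = re.compile(r"\b(SP|D|P)-(\d+)\b")
ITEM_TYPES = {"D": "decision", "SP": "system_pattern", "P": "progress"}

def find_link_candidates(utterance):
    """Return (item_type, item_id) pairs for every ConPort item mentioned."""
    return [
        (ITEM_TYPES[prefix], int(number))
        for prefix, number in ITEM_PATTERN.findall(utterance)
    ]

mentions = find_link_candidates(
    "System Pattern SP-2 addresses a concern noted in Decision D-5."
)
print(mentions)  # → [('system_pattern', 2), ('decision', 5)]
```

Whenever two or more items are detected in one utterance, the agent has a candidate pair to confirm with the user before calling `link_conport_items`.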
@@ -4,48 +4,44 @@ from django.contrib.sites.models import Site


 class Command(BaseCommand):
-    help = 'Set up social authentication providers for development'
+    help = "Set up social authentication providers for development"

     def handle(self, *args, **options):
         # Get the current site
         site = Site.objects.get_current()
-        self.stdout.write(f'Setting up social providers for site: {site}')
+        self.stdout.write(f"Setting up social providers for site: {site}")

         # Clear existing social apps to avoid duplicates
         deleted_count = SocialApp.objects.all().delete()[0]
-        self.stdout.write(f'Cleared {deleted_count} existing social apps')
+        self.stdout.write(f"Cleared {deleted_count} existing social apps")

         # Create Google social app
         google_app = SocialApp.objects.create(
-            provider='google',
-            name='Google',
-            client_id='demo-google-client-id.apps.googleusercontent.com',
-            secret='demo-google-client-secret',
-            key='',
+            provider="google",
+            name="Google",
+            client_id="demo-google-client-id.apps.googleusercontent.com",
+            secret="demo-google-client-secret",
+            key="",
         )
         google_app.sites.add(site)
-        self.stdout.write(
-            self.style.SUCCESS('✅ Created Google social app')
-        )
+        self.stdout.write(self.style.SUCCESS("✅ Created Google social app"))

         # Create Discord social app
         discord_app = SocialApp.objects.create(
-            provider='discord',
-            name='Discord',
-            client_id='demo-discord-client-id',
-            secret='demo-discord-client-secret',
-            key='',
+            provider="discord",
+            name="Discord",
+            client_id="demo-discord-client-id",
+            secret="demo-discord-client-secret",
+            key="",
         )
         discord_app.sites.add(site)
-        self.stdout.write(
-            self.style.SUCCESS('✅ Created Discord social app')
-        )
+        self.stdout.write(self.style.SUCCESS("✅ Created Discord social app"))

         # List all social apps
-        self.stdout.write('\nConfigured social apps:')
+        self.stdout.write("\nConfigured social apps:")
         for app in SocialApp.objects.all():
-            self.stdout.write(f'- {app.name} ({app.provider}): {app.client_id}')
+            self.stdout.write(f"- {app.name} ({app.provider}): {app.client_id}")

         self.stdout.write(
-            self.style.SUCCESS(f'\nTotal social apps: {SocialApp.objects.count()}')
+            self.style.SUCCESS(f"\nTotal social apps: {SocialApp.objects.count()}")
         )
@@ -17,19 +17,26 @@ class UserSerializer(serializers.ModelSerializer):
     """
     User serializer for API responses
     """

     avatar_url = serializers.SerializerMethodField()

     class Meta:
         model = User
         fields = [
-            'id', 'username', 'email', 'first_name', 'last_name',
-            'date_joined', 'is_active', 'avatar_url'
+            "id",
+            "username",
+            "email",
+            "first_name",
+            "last_name",
+            "date_joined",
+            "is_active",
+            "avatar_url",
         ]
-        read_only_fields = ['id', 'date_joined', 'is_active']
+        read_only_fields = ["id", "date_joined", "is_active"]

     def get_avatar_url(self, obj):
         """Get user avatar URL"""
-        if hasattr(obj, 'profile') and obj.profile.avatar:
+        if hasattr(obj, "profile") and obj.profile.avatar:
             return obj.profile.avatar.url
         return None
@@ -38,59 +45,57 @@ class LoginSerializer(serializers.Serializer):
     """
     Serializer for user login
     """

     username = serializers.CharField(
-        max_length=254,
-        help_text="Username or email address"
+        max_length=254, help_text="Username or email address"
     )
     password = serializers.CharField(
-        max_length=128,
-        style={'input_type': 'password'},
-        trim_whitespace=False
+        max_length=128, style={"input_type": "password"}, trim_whitespace=False
     )

     def validate(self, attrs):
-        username = attrs.get('username')
-        password = attrs.get('password')
+        username = attrs.get("username")
+        password = attrs.get("password")

         if username and password:
             return attrs

-        raise serializers.ValidationError(
-            'Must include username/email and password.'
-        )
+        raise serializers.ValidationError("Must include username/email and password.")


 class SignupSerializer(serializers.ModelSerializer):
     """
     Serializer for user registration
     """

     password = serializers.CharField(
         write_only=True,
         validators=[validate_password],
-        style={'input_type': 'password'}
+        style={"input_type": "password"},
     )
     password_confirm = serializers.CharField(
-        write_only=True,
-        style={'input_type': 'password'}
+        write_only=True, style={"input_type": "password"}
     )

     class Meta:
         model = User
         fields = [
-            'username', 'email', 'first_name', 'last_name',
-            'password', 'password_confirm'
+            "username",
+            "email",
+            "first_name",
+            "last_name",
+            "password",
+            "password_confirm",
         ]
         extra_kwargs = {
-            'password': {'write_only': True},
-            'email': {'required': True},
+            "password": {"write_only": True},
+            "email": {"required": True},
         }

     def validate_email(self, value):
         """Validate email is unique"""
         if UserModel.objects.filter(email=value).exists():
-            raise serializers.ValidationError(
-                "A user with this email already exists."
-            )
+            raise serializers.ValidationError("A user with this email already exists.")
         return value

     def validate_username(self, value):
@@ -103,24 +108,22 @@ class SignupSerializer(serializers.ModelSerializer):

     def validate(self, attrs):
         """Validate passwords match"""
-        password = attrs.get('password')
-        password_confirm = attrs.get('password_confirm')
+        password = attrs.get("password")
+        password_confirm = attrs.get("password_confirm")

         if password != password_confirm:
-            raise serializers.ValidationError({
-                'password_confirm': 'Passwords do not match.'
-            })
+            raise serializers.ValidationError(
+                {"password_confirm": "Passwords do not match."}
+            )

         return attrs

     def create(self, validated_data):
         """Create user with validated data"""
-        validated_data.pop('password_confirm', None)
-        password = validated_data.pop('password')
+        validated_data.pop("password_confirm", None)
+        password = validated_data.pop("password")

-        user = UserModel.objects.create(
-            **validated_data
-        )
+        user = UserModel.objects.create(**validated_data)
         user.set_password(password)
         user.save()
@@ -131,6 +134,7 @@ class PasswordResetSerializer(serializers.Serializer):
     """
     Serializer for password reset request
     """
+
     email = serializers.EmailField()

     def validate_email(self, value):
@@ -145,37 +149,36 @@ class PasswordResetSerializer(serializers.Serializer):

     def save(self, **kwargs):
         """Send password reset email if user exists"""
-        if hasattr(self, 'user'):
+        if hasattr(self, "user"):
             # Create password reset token
             token = get_random_string(64)
             PasswordReset.objects.update_or_create(
                 user=self.user,
                 defaults={
-                    'token': token,
-                    'expires_at': timezone.now() + timedelta(hours=24),
-                    'used': False
-                }
+                    "token": token,
+                    "expires_at": timezone.now() + timedelta(hours=24),
+                    "used": False,
+                },
             )

             # Send reset email
-            request = self.context.get('request')
+            request = self.context.get("request")
             if request:
                 site = get_current_site(request)
                 reset_url = f"{request.scheme}://{site.domain}/reset-password/{token}/"

                 context = {
-                    'user': self.user,
-                    'reset_url': reset_url,
-                    'site_name': site.name,
+                    "user": self.user,
+                    "reset_url": reset_url,
+                    "site_name": site.name,
                 }

                 email_html = render_to_string(
-                    'accounts/email/password_reset.html',
-                    context
+                    "accounts/email/password_reset.html", context
                 )

                 EmailService.send_email(
-                    to=getattr(self.user, 'email', None),
+                    to=getattr(self.user, "email", None),
                     subject="Reset your password",
                     text=f"Click the link to reset your password: {reset_url}",
                     site=site,
@@ -187,49 +190,45 @@ class PasswordChangeSerializer(serializers.Serializer):
     """
     Serializer for password change
     """

     old_password = serializers.CharField(
-        max_length=128,
-        style={'input_type': 'password'}
+        max_length=128, style={"input_type": "password"}
     )
     new_password = serializers.CharField(
-        max_length=128,
-        validators=[validate_password],
-        style={'input_type': 'password'}
+        max_length=128, validators=[validate_password], style={"input_type": "password"}
     )
     new_password_confirm = serializers.CharField(
-        max_length=128,
-        style={'input_type': 'password'}
+        max_length=128, style={"input_type": "password"}
     )

     def validate_old_password(self, value):
         """Validate old password is correct"""
-        user = self.context['request'].user
+        user = self.context["request"].user
         if not user.check_password(value):
-            raise serializers.ValidationError(
-                'Old password is incorrect.'
-            )
+            raise serializers.ValidationError("Old password is incorrect.")
         return value

     def validate(self, attrs):
         """Validate new passwords match"""
-        new_password = attrs.get('new_password')
-        new_password_confirm = attrs.get('new_password_confirm')
+        new_password = attrs.get("new_password")
+        new_password_confirm = attrs.get("new_password_confirm")

         if new_password != new_password_confirm:
-            raise serializers.ValidationError({
-                'new_password_confirm': 'New passwords do not match.'
-            })
+            raise serializers.ValidationError(
+                {"new_password_confirm": "New passwords do not match."}
+            )

         return attrs

     def save(self, **kwargs):
         """Change user password"""
-        user = self.context['request'].user
-        new_password = self.initial_data.get(
-            'new_password') if self.initial_data else None
+        user = self.context["request"].user
+        new_password = (
+            self.initial_data.get("new_password") if self.initial_data else None
+        )

         if new_password is None:
-            raise serializers.ValidationError('New password is required.')
+            raise serializers.ValidationError("New password is required.")

         user.set_password(new_password)
         user.save()
@@ -241,6 +240,7 @@ class SocialProviderSerializer(serializers.Serializer):
     """
     Serializer for social authentication providers
    """
+
     id = serializers.CharField()
     name = serializers.CharField()
     login_url = serializers.URLField()
backend/apps/api/v1/serializers_rankings.py (new file, 252 lines)
@@ -0,0 +1,252 @@
"""
|
||||
API serializers for the ride ranking system.
|
||||
"""
|
||||
|
||||
from rest_framework import serializers
|
||||
from drf_spectacular.utils import extend_schema_serializer, OpenApiExample
|
||||
|
||||
from apps.rides.models import RideRanking, RidePairComparison, RankingSnapshot
|
||||
|
||||
|
||||
@extend_schema_serializer(
|
||||
examples=[
|
||||
OpenApiExample(
|
||||
"Ride Ranking Example",
|
||||
summary="Example ranking response",
|
||||
description="A ride ranking with all metrics",
|
||||
value={
|
||||
"id": 1,
|
||||
"rank": 1,
|
||||
"ride": {
|
||||
"id": 123,
|
||||
"name": "Steel Vengeance",
|
||||
"slug": "steel-vengeance",
|
||||
"park": {"id": 45, "name": "Cedar Point", "slug": "cedar-point"},
|
||||
"category": "RC",
|
||||
},
|
||||
"wins": 523,
|
||||
"losses": 87,
|
||||
"ties": 45,
|
||||
"winning_percentage": 0.8234,
|
||||
"mutual_riders_count": 1250,
|
||||
"comparison_count": 655,
|
||||
"average_rating": 9.2,
|
||||
"last_calculated": "2024-01-15T02:00:00Z",
|
||||
"rank_change": 2,
|
||||
"previous_rank": 3,
|
||||
},
|
||||
)
|
||||
]
|
||||
)
|
||||
class RideRankingSerializer(serializers.ModelSerializer):
|
||||
"""Serializer for ride rankings."""
|
||||
|
||||
ride = serializers.SerializerMethodField()
|
||||
rank_change = serializers.SerializerMethodField()
|
||||
previous_rank = serializers.SerializerMethodField()
|
||||
|
||||
class Meta:
|
||||
model = RideRanking
|
||||
fields = [
|
||||
"id",
|
||||
"rank",
|
||||
"ride",
|
||||
"wins",
|
||||
"losses",
|
||||
"ties",
|
||||
"winning_percentage",
|
||||
"mutual_riders_count",
|
||||
"comparison_count",
|
||||
"average_rating",
|
||||
"last_calculated",
|
||||
"rank_change",
|
||||
"previous_rank",
|
||||
]
|
||||
|
||||
def get_ride(self, obj):
|
||||
"""Get ride details."""
|
||||
return {
|
||||
"id": obj.ride.id,
|
||||
"name": obj.ride.name,
|
||||
"slug": obj.ride.slug,
|
||||
"park": {
|
||||
"id": obj.ride.park.id,
|
||||
"name": obj.ride.park.name,
|
||||
"slug": obj.ride.park.slug,
|
||||
},
|
||||
"category": obj.ride.category,
|
||||
}
|
||||
|
||||
def get_rank_change(self, obj):
|
||||
"""Calculate rank change from previous snapshot."""
|
||||
latest_snapshots = RankingSnapshot.objects.filter(ride=obj.ride).order_by(
|
||||
"-snapshot_date"
|
||||
)[:2]
|
||||
|
||||
if len(latest_snapshots) >= 2:
|
||||
return latest_snapshots[0].rank - latest_snapshots[1].rank
|
||||
return None
|
||||
|
||||
def get_previous_rank(self, obj):
|
||||
"""Get previous rank."""
|
||||
latest_snapshots = RankingSnapshot.objects.filter(ride=obj.ride).order_by(
|
||||
"-snapshot_date"
|
||||
)[:2]
|
||||
|
||||
if len(latest_snapshots) >= 2:
|
||||
return latest_snapshots[1].rank
|
||||
return None
|
||||
|
||||
|
||||
class RideRankingDetailSerializer(serializers.ModelSerializer):
|
||||
"""Detailed serializer for a specific ride's ranking."""
|
||||
|
||||
ride = serializers.SerializerMethodField()
|
||||
head_to_head_comparisons = serializers.SerializerMethodField()
|
||||
ranking_history = serializers.SerializerMethodField()
|
||||
|
||||
class Meta:
|
||||
model = RideRanking
|
||||
fields = [
|
||||
"id",
|
||||
"rank",
|
||||
"ride",
|
||||
"wins",
|
||||
"losses",
|
||||
"ties",
|
||||
"winning_percentage",
|
||||
"mutual_riders_count",
|
||||
"comparison_count",
|
||||
"average_rating",
|
||||
"last_calculated",
|
||||
"calculation_version",
|
||||
"head_to_head_comparisons",
|
||||
"ranking_history",
|
||||
]
|
||||
|
||||
def get_ride(self, obj):
|
||||
"""Get detailed ride information."""
|
||||
ride = obj.ride
|
||||
return {
|
||||
"id": ride.id,
|
||||
"name": ride.name,
|
||||
"slug": ride.slug,
|
||||
"description": ride.description,
|
||||
"park": {
|
||||
"id": ride.park.id,
|
||||
"name": ride.park.name,
|
||||
"slug": ride.park.slug,
|
||||
"location": {
|
||||
"city": (
|
||||
ride.park.location.city
|
||||
if hasattr(ride.park, "location")
|
||||
else None
|
||||
),
|
||||
"state": (
|
||||
ride.park.location.state
|
||||
if hasattr(ride.park, "location")
|
||||
else None
|
||||
),
|
||||
"country": (
|
||||
ride.park.location.country
|
||||
if hasattr(ride.park, "location")
|
||||
else None
|
||||
),
|
||||
},
|
||||
},
|
||||
"category": ride.category,
|
||||
"manufacturer": (
|
||||
{"id": ride.manufacturer.id, "name": ride.manufacturer.name}
|
||||
if ride.manufacturer
|
||||
else None
|
||||
),
|
||||
"opening_date": ride.opening_date,
|
||||
"status": ride.status,
|
||||
}
|
||||
|
||||
def get_head_to_head_comparisons(self, obj):
|
||||
"""Get top head-to-head comparisons."""
|
||||
from django.db.models import Q
|
||||
|
||||
comparisons = (
|
||||
RidePairComparison.objects.filter(Q(ride_a=obj.ride) | Q(ride_b=obj.ride))
|
||||
.select_related("ride_a", "ride_b")
|
||||
.order_by("-mutual_riders_count")[:10]
|
||||
)
|
||||
|
||||
results = []
|
||||
for comp in comparisons:
|
||||
if comp.ride_a == obj.ride:
|
||||
opponent = comp.ride_b
|
||||
wins = comp.ride_a_wins
|
||||
losses = comp.ride_b_wins
|
||||
else:
|
||||
opponent = comp.ride_a
|
||||
wins = comp.ride_b_wins
|
||||
losses = comp.ride_a_wins
|
||||
|
||||
result = "win" if wins > losses else "loss" if losses > wins else "tie"
|
||||
|
||||
results.append(
|
||||
{
|
||||
"opponent": {
|
||||
"id": opponent.id,
|
||||
"name": opponent.name,
|
||||
"slug": opponent.slug,
|
||||
"park": opponent.park.name,
|
||||
},
|
||||
"wins": wins,
|
||||
"losses": losses,
|
||||
"ties": comp.ties,
|
||||
"result": result,
|
||||
"mutual_riders": comp.mutual_riders_count,
|
||||
}
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
def get_ranking_history(self, obj):
|
||||
"""Get recent ranking history."""
|
||||
history = RankingSnapshot.objects.filter(ride=obj.ride).order_by(
|
||||
"-snapshot_date"
|
||||
)[:30]
|
||||
|
||||
return [
|
||||
{
|
||||
"date": snapshot.snapshot_date,
|
||||
"rank": snapshot.rank,
|
||||
"winning_percentage": float(snapshot.winning_percentage),
|
||||
}
|
||||
for snapshot in history
|
||||
]
|
||||
|
||||
|
||||
class RankingSnapshotSerializer(serializers.ModelSerializer):
|
||||
"""Serializer for ranking history snapshots."""
|
||||
|
||||
ride_name = serializers.CharField(source="ride.name", read_only=True)
|
||||
park_name = serializers.CharField(source="ride.park.name", read_only=True)
|
||||
|
||||
class Meta:
|
||||
model = RankingSnapshot
|
||||
fields = [
|
||||
"id",
|
||||
"ride",
|
||||
"ride_name",
|
||||
"park_name",
|
||||
"rank",
|
||||
"winning_percentage",
|
||||
"snapshot_date",
|
||||
]
|
||||
|
||||
|
||||
class RankingStatsSerializer(serializers.Serializer):
|
||||
"""Serializer for ranking system statistics."""
|
||||
|
||||
total_ranked_rides = serializers.IntegerField()
|
||||
total_comparisons = serializers.IntegerField()
|
||||
last_calculation_time = serializers.DateTimeField()
|
||||
calculation_duration = serializers.FloatField()
|
||||
top_rated_ride = serializers.DictField()
|
||||
most_compared_ride = serializers.DictField()
|
||||
biggest_rank_change = serializers.DictField()
|
||||
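The snapshot-based delta computed in `get_rank_change` above is latest rank minus previous rank, so with rank 1 as best, a negative value means the ride climbed the rankings. Reduced to plain arithmetic:

```python
# Rank delta exactly as get_rank_change computes it: newest snapshot's rank
# minus the previous snapshot's rank (rank 1 is best, so a negative delta
# means the ride moved up).

def rank_change(ranks_newest_first):
    """ranks_newest_first: ranks ordered newest first, as in the queryset."""
    if len(ranks_newest_first) >= 2:
        return ranks_newest_first[0] - ranks_newest_first[1]
    return None  # not enough history, mirroring the serializer

print(rank_change([1, 3]))  # → -2 (climbed two places)
print(rank_change([1]))     # → None
```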
@@ -44,8 +44,14 @@ from .viewsets import (
     UserProfileViewSet,
     TopListViewSet,
     TopListItemViewSet,
+    # Trending system views
+    TrendingAPIView,
+    NewContentAPIView,
 )

+# Import ranking viewsets
+from .viewsets_rankings import RideRankingViewSet, TriggerRankingCalculationView
+
 # Create the main API router
 router = DefaultRouter()

@@ -53,7 +59,7 @@ router = DefaultRouter()

 # Core models
 router.register(r"parks", ParkViewSet, basename="park")
-router.register(r"rides", RideViewSet, basename="ride")
+# Note: rides registered below with list-only actions to enforce nested-only detail access

 # Park-related models
 router.register(r"park-areas", ParkAreaViewSet, basename="park-area")

@@ -79,6 +85,9 @@ router.register(r"top-list-items", TopListItemViewSet, basename="top-list-item")
 router.register(r"ref/parks", ParkReadOnlyViewSet, basename="park-ref")
 router.register(r"ref/rides", RideReadOnlyViewSet, basename="ride-ref")

+# Register ranking endpoints
+router.register(r"rankings", RideRankingViewSet, basename="ranking")
+
 app_name = "api_v1"

 urlpatterns = [

@@ -137,6 +146,39 @@ urlpatterns = [
         RideHistoryViewSet.as_view({"get": "retrieve"}),
         name="ride-history-detail",
     ),
+    # Nested park-scoped ride endpoints
+    path(
+        "parks/<str:park_slug>/rides/",
+        RideViewSet.as_view({"get": "list", "post": "create"}),
+        name="park-rides-list",
+    ),
+    path(
+        "parks/<str:park_slug>/rides/<str:ride_slug>/",
+        RideViewSet.as_view(
+            {
+                "get": "retrieve",
+                "put": "update",
+                "patch": "partial_update",
+                "delete": "destroy",
+            }
+        ),
+        name="park-rides-detail",
+    ),
+    # Trending system endpoints
+    path("trending/content/", TrendingAPIView.as_view(), name="trending"),
+    path("trending/new/", NewContentAPIView.as_view(), name="new-content"),
+    # Ranking system endpoints
+    path(
+        "rankings/calculate/",
+        TriggerRankingCalculationView.as_view(),
+        name="trigger-ranking-calculation",
+    ),
+    # Global rides list endpoint (detail access only via nested park routes)
+    path(
+        "rides/",
+        RideViewSet.as_view({"get": "list"}),
+        name="ride-list",
+    ),
     # Include all router-generated URLs
     path("", include(router.urls)),
 ]
@@ -28,6 +28,7 @@ from django.core.exceptions import ValidationError
 from django.utils import timezone
 from django.conf import settings
 from django.shortcuts import get_object_or_404
+from django.http import Http404
 from allauth.socialaccount.models import SocialApp
 from allauth.socialaccount import providers
 from health_check.views import MainView
@@ -669,12 +670,20 @@ class RideViewSet(ModelViewSet):
     def get_queryset(self):  # type: ignore[override]
         """Get optimized queryset based on action."""
         if self.action == "list":
-            # Parse filter parameters for list view
+            # CRITICAL FIX: Check if this is a nested endpoint first
+            park_slug = self.kwargs.get("park_slug")
+            if park_slug:
+                # For nested endpoints, use the dedicated park selector
+                from apps.rides.selectors import rides_in_park
+
+                return rides_in_park(park_slug=park_slug)
+
+            # For global endpoints, parse filter parameters and use general selector
             filter_serializer = RideFilterInputSerializer(
                 data=self.request.query_params  # type: ignore[attr-defined]
             )
             filter_serializer.is_valid(raise_exception=True)
             filters = filter_serializer.validated_data

             return ride_list_for_display(filters=filters)  # type: ignore[arg-type]

         # For other actions, return base queryset

@@ -690,7 +699,10 @@ class RideViewSet(ModelViewSet):
         ride_slug = self.kwargs.get("slug") or self.kwargs.get("ride_slug")

         if park_slug and ride_slug:
-            return ride_detail_optimized(slug=ride_slug, park_slug=park_slug)
+            try:
+                return ride_detail_optimized(slug=ride_slug, park_slug=park_slug)
+            except Ride.DoesNotExist:
+                raise Http404("Ride not found")
         elif ride_slug:
             # For rides accessed directly by slug, we'll use the first approach
             # and let the 404 handling work naturally
@@ -1748,21 +1760,43 @@ class LoginAPIView(TurnstileMixin, APIView):
         email_or_username = serializer.validated_data["username"]
         password = serializer.validated_data["password"]  # type: ignore[index]

-        # Try to authenticate with email first, then username
+        # Optimized user lookup: single query using Q objects
+        from django.db.models import Q
+        from django.contrib.auth import get_user_model
+
+        User = get_user_model()
         user = None
-        if "@" in email_or_username:
-            try:
-                user_obj = UserModel.objects.get(email=email_or_username)
+
+        # Single query to find user by email OR username
+        try:
+            if "@" in email_or_username:
+                # Email-like input: try email first, then username as fallback
+                user_obj = (
+                    User.objects.select_related()
+                    .filter(
+                        Q(email=email_or_username) | Q(username=email_or_username)
+                    )
+                    .first()
+                )
+            else:
+                # Username-like input: try username first, then email as fallback
+                user_obj = (
+                    User.objects.select_related()
+                    .filter(
+                        Q(username=email_or_username) | Q(email=email_or_username)
+                    )
+                    .first()
+                )
+
+            if user_obj:
                 user = authenticate(
                     # type: ignore[attr-defined]
                     request._request,
                     username=user_obj.username,
                     password=password,
                 )
-            except UserModel.DoesNotExist:
-                pass
-
-        if not user:
+        except Exception:
+            # Fallback to original behavior
             user = authenticate(
                 # type: ignore[attr-defined]
                 request._request,
@@ -1773,6 +1807,7 @@ class LoginAPIView(TurnstileMixin, APIView):
         if user:
             if user.is_active:
                 login(request._request, user)  # type: ignore[attr-defined]
+                # Optimized token creation - get_or_create is atomic
                 token, created = Token.objects.get_or_create(user=user)

                 response_serializer = LoginOutputSerializer(
@@ -1981,48 +2016,56 @@ class SocialProvidersAPIView(APIView):
     serializer_class = SocialProviderOutputSerializer

     def get(self, request: Request) -> Response:
+        from django.core.cache import cache
         from django.contrib.sites.shortcuts import get_current_site

         site = get_current_site(request._request)  # type: ignore[attr-defined]

+        # Cache key based on site and request host
+        cache_key = (
+            f"social_providers:{getattr(site, 'id', site.pk)}:{request.get_host()}"
+        )
+
+        # Try to get from cache first (cache for 15 minutes)
+        cached_providers = cache.get(cache_key)
+        if cached_providers is not None:
+            return Response(cached_providers)
+
         providers_list = []

-        # Get all configured social apps for the current site
-        social_apps = SocialApp.objects.filter(sites=site)
+        # Optimized query: filter by site and order by provider name
+        social_apps = SocialApp.objects.filter(sites=site).order_by("provider")

         for social_app in social_apps:
             try:
-                # Get provider class from providers module
-                provider_module = getattr(providers, social_app.provider, None)
-                if provider_module and hasattr(provider_module, "provider"):
-                    provider_class = provider_module.provider
-                    provider_instance = provider_class(request)
+                # Simplified provider name resolution - avoid expensive provider class loading
+                provider_name = social_app.name or social_app.provider.title()

-                    # Build auth URL efficiently
-                    auth_url = request.build_absolute_uri(
-                        f"/accounts/{social_app.provider}/login/"
-                    )
+                # Build auth URL efficiently
+                auth_url = request.build_absolute_uri(
+                    f"/accounts/{social_app.provider}/login/"
+                )

-                    providers_list.append(
-                        {
-                            "id": social_app.provider,
-                            "name": provider_instance.name,
-                            "authUrl": auth_url,
-                        }
-                    )
-                else:
-                    # Fallback: use provider id as name
-                    auth_url = request.build_absolute_uri(
-                        f"/accounts/{social_app.provider}/login/"
-                    )
-                    providers_list.append(
-                        {
-                            "id": social_app.provider,
-                            "name": social_app.provider.title(),
-                            "authUrl": auth_url,
-                        }
-                    )
+                providers_list.append(
+                    {
+                        "id": social_app.provider,
+                        "name": provider_name,
+                        "authUrl": auth_url,
+                    }
+                )
             except Exception:
                 # Skip if provider can't be loaded
                 continue

+        # Serialize and cache the result
         serializer = SocialProviderOutputSerializer(providers_list, many=True)
-        return Response(serializer.data)
+        response_data = serializer.data
+
+        # Cache for 15 minutes (900 seconds)
+        cache.set(cache_key, response_data, 900)
+
+        return Response(response_data)
|
||||
|
||||
|
||||
@extend_schema_view(
|
||||
@@ -2908,3 +2951,192 @@ class UnifiedHistoryViewSet(ReadOnlyModelViewSet):

        serializer = UnifiedHistoryTimelineSerializer(timeline_data)
        return Response(serializer.data)


# === TRENDING VIEWSETS ===


@extend_schema_view(
    list=extend_schema(
        summary="Get trending content",
        description="Retrieve trending parks and rides based on view counts, ratings, and recency.",
        parameters=[
            OpenApiParameter(
                name="limit",
                type=OpenApiTypes.INT,
                location=OpenApiParameter.QUERY,
                description="Number of trending items to return (default: 20, max: 100)",
            ),
            OpenApiParameter(
                name="timeframe",
                type=OpenApiTypes.STR,
                location=OpenApiParameter.QUERY,
                description="Timeframe for trending calculation (day, week, month) - default: week",
            ),
        ],
        responses={200: OpenApiTypes.OBJECT},
        tags=["Trending"],
    ),
)
class TrendingAPIView(APIView):
    """API endpoint for trending content."""

    permission_classes = [AllowAny]

    def get(self, request: Request) -> Response:
        """Get trending parks and rides."""
        from apps.core.services.trending_service import TrendingService

        # Parse parameters
        limit = min(int(request.query_params.get("limit", 20)), 100)

        # Get trending content
        trending_service = TrendingService()
        all_trending = trending_service.get_trending_content(limit=limit * 2)

        # Separate by content type
        trending_rides = []
        trending_parks = []

        for item in all_trending:
            if item.get("category") == "ride":
                trending_rides.append(item)
            elif item.get("category") == "park":
                trending_parks.append(item)

        # Limit each category
        trending_rides = trending_rides[: limit // 3] if trending_rides else []
        trending_parks = trending_parks[: limit // 3] if trending_parks else []

        # Create mock latest reviews (since not implemented yet)
        latest_reviews = [
            {
                "id": 1,
                "name": "Steel Vengeance Review",
                "location": "Cedar Point",
                "category": "Roller Coaster",
                "rating": 5.0,
                "rank": 1,
                "views": 1234,
                "views_change": "+45%",
                "slug": "steel-vengeance-review",
            }
        ][: limit // 3]

        # Return in expected frontend format
        response_data = {
            "trending_rides": trending_rides,
            "trending_parks": trending_parks,
            "latest_reviews": latest_reviews,
        }

        return Response(response_data)
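The `get()` handler above buckets the mixed trending feed by category and caps each bucket at `limit // 3`. A minimal standalone sketch of that bucketing logic (the sample items are hypothetical, not real feed data):

```python
def split_trending(all_trending, limit=20):
    """Bucket a mixed trending feed by category, capping each bucket at limit // 3."""
    trending_rides = [item for item in all_trending if item.get("category") == "ride"]
    trending_parks = [item for item in all_trending if item.get("category") == "park"]
    return {
        "trending_rides": trending_rides[: limit // 3],
        "trending_parks": trending_parks[: limit // 3],
    }


# Hypothetical sample feed: ten rides and one park
items = [{"category": "ride", "name": f"ride-{n}"} for n in range(10)]
items.append({"category": "park", "name": "park-0"})
result = split_trending(items, limit=9)
```

With `limit=9` each bucket keeps at most `9 // 3 == 3` items, so the ride list is truncated while the single park survives.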


@extend_schema_view(
    list=extend_schema(
        summary="Get new content",
        description="Retrieve recently added parks and rides.",
        parameters=[
            OpenApiParameter(
                name="limit",
                type=OpenApiTypes.INT,
                location=OpenApiParameter.QUERY,
                description="Number of new items to return (default: 20, max: 100)",
            ),
            OpenApiParameter(
                name="days",
                type=OpenApiTypes.INT,
                location=OpenApiParameter.QUERY,
                description="Number of days to look back for new content (default: 30, max: 365)",
            ),
        ],
        responses={200: OpenApiTypes.OBJECT},
        tags=["Trending"],
    ),
)
class NewContentAPIView(APIView):
    """API endpoint for new content."""

    permission_classes = [AllowAny]

    def get(self, request: Request) -> Response:
        """Get new parks and rides."""
        from apps.core.services.trending_service import TrendingService
        from datetime import datetime, date

        # Parse parameters
        limit = min(int(request.query_params.get("limit", 20)), 100)

        # Get new content with longer timeframe to get more data
        trending_service = TrendingService()
        all_new_content = trending_service.get_new_content(
            limit=limit * 2, days_back=60
        )

        recently_added = []
        newly_opened = []
        upcoming = []

        # Categorize items based on date
        today = date.today()

        for item in all_new_content:
            date_added = item.get("date_added", "")
            if date_added:
                try:
                    # Parse the date string
                    if isinstance(date_added, str):
                        item_date = datetime.fromisoformat(date_added).date()
                    else:
                        item_date = date_added

                    # Calculate days difference
                    days_diff = (today - item_date).days

                    if days_diff <= 30:  # Recently added (last 30 days)
                        recently_added.append(item)
                    elif days_diff <= 365:  # Newly opened (last year)
                        newly_opened.append(item)
                    else:  # Older items
                        newly_opened.append(item)

                except (ValueError, TypeError):
                    # If date parsing fails, add to recently added
                    recently_added.append(item)
            else:
                recently_added.append(item)

        # Create mock upcoming items
        upcoming = [
            {
                "id": 1,
                "name": "Epic Universe",
                "location": "Universal Orlando",
                "category": "Theme Park",
                "date_added": "Opening 2025",
                "slug": "epic-universe",
            },
            {
                "id": 2,
                "name": "New Fantasyland Expansion",
                "location": "Magic Kingdom",
                "category": "Land Expansion",
                "date_added": "Opening 2026",
                "slug": "fantasyland-expansion",
            },
        ]

        # Limit each category
        recently_added = recently_added[: limit // 3] if recently_added else []
        newly_opened = newly_opened[: limit // 3] if newly_opened else []
        upcoming = upcoming[: limit // 3] if upcoming else []

        # Return in expected frontend format
        response_data = {
            "recently_added": recently_added,
            "newly_opened": newly_opened,
            "upcoming": upcoming,
        }

        return Response(response_data)
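The categorization step in `NewContentAPIView` above sorts items into `recently_added` vs `newly_opened` by age in days, falling back to `recently_added` when the date is missing or unparseable. A pure-Python sketch of that rule (the sample dates are hypothetical):

```python
from datetime import date, datetime


def categorize(item, today):
    """Return the bucket name for one item, mirroring the 30-day cutoff above."""
    date_added = item.get("date_added", "")
    if not date_added:
        return "recently_added"
    try:
        item_date = (
            datetime.fromisoformat(date_added).date()
            if isinstance(date_added, str)
            else date_added
        )
    except (ValueError, TypeError):
        return "recently_added"
    days_diff = (today - item_date).days
    return "recently_added" if days_diff <= 30 else "newly_opened"


today = date(2025, 6, 1)
fresh = categorize({"date_added": "2025-05-20"}, today)  # 12 days old
old = categorize({"date_added": "2024-01-15"}, today)    # well past 30 days
missing = categorize({}, today)                          # no date at all
```

Note that in the view both the `days_diff <= 365` and the `else` branch append to `newly_opened`, so everything older than 30 days lands in the same bucket, which is what the two-way sketch reflects.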
334	backend/apps/api/v1/viewsets_rankings.py	Normal file
@@ -0,0 +1,334 @@
"""
API viewsets for the ride ranking system.
"""

from django.db.models import Q, Count, Max
from django.utils import timezone
from django_filters.rest_framework import DjangoFilterBackend
from drf_spectacular.utils import extend_schema, extend_schema_view, OpenApiParameter
from drf_spectacular.types import OpenApiTypes
from rest_framework import status
from rest_framework.decorators import action
from rest_framework.filters import OrderingFilter
from rest_framework.permissions import IsAuthenticatedOrReadOnly, AllowAny
from rest_framework.response import Response
from rest_framework.viewsets import ReadOnlyModelViewSet
from rest_framework.views import APIView

from apps.rides.models import RideRanking, RidePairComparison, RankingSnapshot
from apps.rides.services import RideRankingService
from .serializers_rankings import (
    RideRankingSerializer,
    RideRankingDetailSerializer,
    RankingSnapshotSerializer,
    RankingStatsSerializer,
)


@extend_schema_view(
    list=extend_schema(
        summary="List ride rankings",
        description="Get the current ride rankings calculated using the Internet Roller Coaster Poll algorithm.",
        parameters=[
            OpenApiParameter(
                name="category",
                type=OpenApiTypes.STR,
                location=OpenApiParameter.QUERY,
                description="Filter by ride category (RC, DR, FR, WR, TR, OT)",
                enum=["RC", "DR", "FR", "WR", "TR", "OT"],
            ),
            OpenApiParameter(
                name="min_riders",
                type=OpenApiTypes.INT,
                location=OpenApiParameter.QUERY,
                description="Minimum number of mutual riders required",
            ),
            OpenApiParameter(
                name="park",
                type=OpenApiTypes.STR,
                location=OpenApiParameter.QUERY,
                description="Filter by park slug",
            ),
            OpenApiParameter(
                name="ordering",
                type=OpenApiTypes.STR,
                location=OpenApiParameter.QUERY,
                description="Order results (rank, -rank, winning_percentage, -winning_percentage)",
            ),
        ],
        responses={200: RideRankingSerializer(many=True)},
        tags=["Rankings"],
    ),
    retrieve=extend_schema(
        summary="Get ranking details",
        description="Get detailed ranking information for a specific ride.",
        responses={
            200: RideRankingDetailSerializer,
            404: OpenApiTypes.OBJECT,
        },
        tags=["Rankings"],
    ),
    history=extend_schema(
        summary="Get ranking history",
        description="Get historical ranking data for a specific ride.",
        responses={200: RankingSnapshotSerializer(many=True)},
        tags=["Rankings"],
    ),
    statistics=extend_schema(
        summary="Get ranking statistics",
        description="Get overall statistics about the ranking system.",
        responses={200: RankingStatsSerializer},
        tags=["Rankings", "Statistics"],
    ),
)
class RideRankingViewSet(ReadOnlyModelViewSet):
    """
    ViewSet for ride rankings.

    Provides access to ride rankings calculated using the Internet Roller Coaster Poll algorithm.
    Rankings are updated daily and based on pairwise comparisons of user ratings.
    """

    permission_classes = [AllowAny]
    lookup_field = "ride__slug"
    lookup_url_kwarg = "ride_slug"
    filter_backends = [DjangoFilterBackend, OrderingFilter]
    filterset_fields = ["ride__category"]
    ordering_fields = [
        "rank",
        "winning_percentage",
        "mutual_riders_count",
        "average_rating",
    ]
    ordering = ["rank"]

    def get_queryset(self):
        """Get rankings with optimized queries."""
        queryset = RideRanking.objects.select_related(
            "ride", "ride__park", "ride__park__location", "ride__manufacturer"
        )

        # Filter by category
        category = self.request.query_params.get("category")
        if category:
            queryset = queryset.filter(ride__category=category)

        # Filter by minimum mutual riders
        min_riders = self.request.query_params.get("min_riders")
        if min_riders:
            try:
                queryset = queryset.filter(mutual_riders_count__gte=int(min_riders))
            except ValueError:
                pass

        # Filter by park
        park_slug = self.request.query_params.get("park")
        if park_slug:
            queryset = queryset.filter(ride__park__slug=park_slug)

        return queryset

    def get_serializer_class(self):
        """Use different serializers for list vs detail."""
        if self.action == "retrieve":
            return RideRankingDetailSerializer
        elif self.action == "history":
            return RankingSnapshotSerializer
        elif self.action == "statistics":
            return RankingStatsSerializer
        return RideRankingSerializer

    @action(detail=True, methods=["get"])
    def history(self, request, ride_slug=None):
        """Get ranking history for a specific ride."""
        ranking = self.get_object()
        history = RankingSnapshot.objects.filter(ride=ranking.ride).order_by(
            "-snapshot_date"
        )[:90]  # Last 3 months

        serializer = self.get_serializer(history, many=True)
        return Response(serializer.data)

    @action(detail=False, methods=["get"])
    def statistics(self, request):
        """Get overall ranking system statistics."""
        total_rankings = RideRanking.objects.count()
        total_comparisons = RidePairComparison.objects.count()

        # Get last calculation time
        latest_ranking = RideRanking.objects.order_by("-last_calculated").first()
        last_calc_time = latest_ranking.last_calculated if latest_ranking else None

        # Get top rated ride
        top_rated = RideRanking.objects.select_related("ride", "ride__park").first()

        # Get most compared ride
        most_compared = (
            RideRanking.objects.select_related("ride", "ride__park")
            .order_by("-comparison_count")
            .first()
        )

        # Get biggest rank change (last 7 days)
        from datetime import timedelta

        week_ago = timezone.now().date() - timedelta(days=7)

        biggest_change = None
        max_change = 0

        current_rankings = RideRanking.objects.select_related("ride")
        for ranking in current_rankings[:100]:  # Check top 100 for performance
            old_snapshot = (
                RankingSnapshot.objects.filter(
                    ride=ranking.ride, snapshot_date__lte=week_ago
                )
                .order_by("-snapshot_date")
                .first()
            )

            if old_snapshot:
                change = abs(old_snapshot.rank - ranking.rank)
                if change > max_change:
                    max_change = change
                    biggest_change = {
                        "ride": {
                            "id": ranking.ride.id,
                            "name": ranking.ride.name,
                            "slug": ranking.ride.slug,
                        },
                        "current_rank": ranking.rank,
                        "previous_rank": old_snapshot.rank,
                        "change": old_snapshot.rank - ranking.rank,
                    }

        stats = {
            "total_ranked_rides": total_rankings,
            "total_comparisons": total_comparisons,
            "last_calculation_time": last_calc_time,
            "calculation_duration": None,  # Would need to track this separately
            "top_rated_ride": (
                {
                    "id": top_rated.ride.id,
                    "name": top_rated.ride.name,
                    "slug": top_rated.ride.slug,
                    "park": top_rated.ride.park.name,
                    "rank": top_rated.rank,
                    "winning_percentage": float(top_rated.winning_percentage),
                    "average_rating": (
                        float(top_rated.average_rating)
                        if top_rated.average_rating
                        else None
                    ),
                }
                if top_rated
                else None
            ),
            "most_compared_ride": (
                {
                    "id": most_compared.ride.id,
                    "name": most_compared.ride.name,
                    "slug": most_compared.ride.slug,
                    "park": most_compared.ride.park.name,
                    "comparison_count": most_compared.comparison_count,
                }
                if most_compared
                else None
            ),
            "biggest_rank_change": biggest_change,
        }

        serializer = RankingStatsSerializer(stats)
        return Response(serializer.data)
    @action(detail=True, methods=["get"])
    def comparisons(self, request, ride_slug=None):
        """Get head-to-head comparisons for a specific ride."""
        ranking = self.get_object()

        comparisons = (
            RidePairComparison.objects.filter(
                Q(ride_a=ranking.ride) | Q(ride_b=ranking.ride)
            )
            .select_related("ride_a", "ride_b", "ride_a__park", "ride_b__park")
            .order_by("-mutual_riders_count")[:50]
        )

        results = []
        for comp in comparisons:
            if comp.ride_a == ranking.ride:
                opponent = comp.ride_b
                wins = comp.ride_a_wins
                losses = comp.ride_b_wins
            else:
                opponent = comp.ride_a
                wins = comp.ride_b_wins
                losses = comp.ride_a_wins

            result = "win" if wins > losses else "loss" if losses > wins else "tie"

            results.append(
                {
                    "opponent": {
                        "id": opponent.id,
                        "name": opponent.name,
                        "slug": opponent.slug,
                        "park": {
                            "id": opponent.park.id,
                            "name": opponent.park.name,
                            "slug": opponent.park.slug,
                        },
                    },
                    "wins": wins,
                    "losses": losses,
                    "ties": comp.ties,
                    "result": result,
                    "mutual_riders": comp.mutual_riders_count,
                    "ride_a_avg_rating": (
                        float(comp.ride_a_avg_rating)
                        if comp.ride_a_avg_rating
                        else None
                    ),
                    "ride_b_avg_rating": (
                        float(comp.ride_b_avg_rating)
                        if comp.ride_b_avg_rating
                        else None
                    ),
                }
            )

        return Response(results)
@extend_schema(
    summary="Trigger ranking calculation",
    description="Manually trigger a ranking calculation (admin only).",
    request=None,
    responses={
        200: OpenApiTypes.OBJECT,
        403: OpenApiTypes.OBJECT,
    },
    tags=["Rankings", "Admin"],
)
class TriggerRankingCalculationView(APIView):
    """
    Admin endpoint to manually trigger ranking calculation.
    """

    permission_classes = [IsAuthenticatedOrReadOnly]

    def post(self, request):
        """Trigger ranking calculation."""
        if not request.user.is_staff:
            return Response(
                {"error": "Admin access required"}, status=status.HTTP_403_FORBIDDEN
            )

        category = request.data.get("category")

        service = RideRankingService()
        result = service.update_all_rankings(category=category)

        return Response(result)
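The `comparisons` action in `RideRankingViewSet` above normalizes each `RidePairComparison` row so that wins and losses are always expressed from the requested ride's side, then labels the matchup. A standalone sketch of that orientation step (dict keys mirror the model field names; the sample slugs and counts are hypothetical):

```python
def orient(comp, ride):
    """Express a pairwise comparison record from `ride`'s point of view."""
    if comp["ride_a"] == ride:
        opponent, wins, losses = comp["ride_b"], comp["ride_a_wins"], comp["ride_b_wins"]
    else:
        opponent, wins, losses = comp["ride_a"], comp["ride_b_wins"], comp["ride_a_wins"]
    result = "win" if wins > losses else "loss" if losses > wins else "tie"
    return {"opponent": opponent, "wins": wins, "losses": losses, "result": result}


# Hypothetical comparison record
comp = {"ride_a": "steel-vengeance", "ride_b": "fury-325",
        "ride_a_wins": 40, "ride_b_wins": 25}
as_a = orient(comp, "steel-vengeance")
as_b = orient(comp, "fury-325")
```

The same record yields a "win" when viewed from one ride and a "loss" from the other, which is why the view swaps the win/loss fields before serializing.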
@@ -26,12 +26,12 @@ class PageView(models.Model):
        ]

    @classmethod
-   def get_trending_items(cls, model_class, hours=24, limit=10):
+   def get_trending_items(cls, model_class, hours=168, limit=10):
        """Get trending items of a specific model class based on views in last X hours.

        Args:
            model_class: The model class to get trending items for (e.g., Park, Ride)
-           hours (int): Number of hours to look back for views (default: 24)
+           hours (int): Number of hours to look back for views (default: 168 = 7 days)
            limit (int): Maximum number of items to return (default: 10)

        Returns:
@@ -61,3 +61,65 @@ class PageView(models.Model):
            return model_class.objects.filter(pk__in=id_list).order_by(preserved)

        return model_class.objects.none()

    @classmethod
    def get_views_growth(
        cls, content_type, object_id, current_period_hours, previous_period_hours
    ):
        """Get view growth statistics between two time periods.

        Args:
            content_type: ContentType instance for the model
            object_id: ID of the specific object
            current_period_hours: Hours for current period (e.g., 24)
            previous_period_hours: Hours for previous period (e.g., 48)

        Returns:
            tuple: (current_views, previous_views, growth_percentage)
        """
        from datetime import timedelta

        now = timezone.now()

        # Current period: last X hours
        current_start = now - timedelta(hours=current_period_hours)
        current_views = cls.objects.filter(
            content_type=content_type, object_id=object_id, timestamp__gte=current_start
        ).count()

        # Previous period: X hours before current period
        previous_start = now - timedelta(hours=previous_period_hours)
        previous_end = current_start
        previous_views = cls.objects.filter(
            content_type=content_type,
            object_id=object_id,
            timestamp__gte=previous_start,
            timestamp__lt=previous_end,
        ).count()

        # Calculate growth percentage
        if previous_views == 0:
            growth_percentage = current_views * 100 if current_views > 0 else 0
        else:
            growth_percentage = (
                (current_views - previous_views) / previous_views
            ) * 100

        return current_views, previous_views, growth_percentage
    @classmethod
    def get_total_views_count(cls, content_type, object_id, hours=168):
        """Get total view count for an object within specified hours.

        Args:
            content_type: ContentType instance for the model
            object_id: ID of the specific object
            hours: Number of hours to look back (default: 168 = 7 days)

        Returns:
            int: Total view count
        """
        cutoff = timezone.now() - timedelta(hours=hours)
        return cls.objects.filter(
            content_type=content_type, object_id=object_id, timestamp__gte=cutoff
        ).count()
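Once the two window counts are known, `get_views_growth` above reduces to a simple percentage formula, with a special case when the previous window had no views. A worked sketch of just that arithmetic (the counts are hypothetical):

```python
def growth_percentage(current_views, previous_views):
    """Percent change between two view-count windows, as computed above."""
    if previous_views == 0:
        # No baseline: treat each current view as +100%, and 0/0 as flat
        return current_views * 100 if current_views > 0 else 0
    return ((current_views - previous_views) / previous_views) * 100


steady = growth_percentage(150, 100)  # 50% more views than the previous window
cold_start = growth_percentage(3, 0)  # no baseline views at all
flat = growth_percentage(0, 0)        # no views in either window
```

The zero-baseline branch means a brand-new page with a handful of views reports a very large growth figure, which is a deliberate bias toward surfacing new content.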
@@ -1 +1 @@
# Django management commands
472	backend/apps/core/management/commands/clear_cache.py	Normal file
@@ -0,0 +1,472 @@
"""
Django management command to clear all types of cache data.

This command provides comprehensive cache clearing functionality including:
- Django cache framework (all configured backends)
- Python __pycache__ directories and .pyc files
- Static files cache
- Session cache
- Template cache
- Tailwind CSS build cache
- OPcache (if available)
"""

import shutil
import subprocess
from pathlib import Path

from django.core.cache import cache, caches
from django.core.management.base import BaseCommand
from django.conf import settings


class Command(BaseCommand):
    help = (
        "Clear all types of cache data including Django cache, "
        "__pycache__, and build caches"
    )

    def add_arguments(self, parser):
        parser.add_argument(
            "--django-cache",
            action="store_true",
            help="Clear Django cache framework cache only",
        )
        parser.add_argument(
            "--pycache",
            action="store_true",
            help="Clear Python __pycache__ directories and .pyc files only",
        )
        parser.add_argument(
            "--static",
            action="store_true",
            help="Clear static files cache only",
        )
        parser.add_argument(
            "--sessions",
            action="store_true",
            help="Clear session cache only",
        )
        parser.add_argument(
            "--templates",
            action="store_true",
            help="Clear template cache only",
        )
        parser.add_argument(
            "--tailwind",
            action="store_true",
            help="Clear Tailwind CSS build cache only",
        )
        parser.add_argument(
            "--opcache",
            action="store_true",
            help="Clear PHP OPcache if available",
        )
        parser.add_argument(
            "--dry-run",
            action="store_true",
            help="Show what would be cleared without actually clearing",
        )
        parser.add_argument(
            "--verbose",
            action="store_true",
            help="Show detailed output of clearing operations",
        )
    def handle(self, *args, **options):
        """Clear cache data based on provided options."""
        self.dry_run = options["dry_run"]
        self.verbose = options["verbose"]

        # If no specific cache type is specified, clear all
        clear_all = not any(
            [
                options["django_cache"],
                options["pycache"],
                options["static"],
                options["sessions"],
                options["templates"],
                options["tailwind"],
                options["opcache"],
            ]
        )

        if self.dry_run:
            self.stdout.write(
                self.style.WARNING("🔍 DRY RUN MODE - No files will be deleted")
            )
            self.stdout.write("")

        self.stdout.write(self.style.SUCCESS("🧹 ThrillWiki Cache Clearing Utility"))
        self.stdout.write("")

        # Clear Django cache framework
        if clear_all or options["django_cache"]:
            self.clear_django_cache()

        # Clear Python __pycache__
        if clear_all or options["pycache"]:
            self.clear_pycache()

        # Clear static files cache
        if clear_all or options["static"]:
            self.clear_static_cache()

        # Clear sessions cache
        if clear_all or options["sessions"]:
            self.clear_sessions_cache()

        # Clear template cache
        if clear_all or options["templates"]:
            self.clear_template_cache()

        # Clear Tailwind cache
        if clear_all or options["tailwind"]:
            self.clear_tailwind_cache()

        # Clear OPcache
        if clear_all or options["opcache"]:
            self.clear_opcache()

        self.stdout.write("")
        self.stdout.write(
            self.style.SUCCESS("✅ Cache clearing completed successfully!")
        )
    def clear_django_cache(self):
        """Clear Django cache framework cache."""
        self.stdout.write("🗄️ Clearing Django cache framework...")

        try:
            # Clear default cache
            if not self.dry_run:
                cache.clear()

            cache_info = f"Default cache ({cache.__class__.__name__})"
            self.stdout.write(self.style.SUCCESS(f" ✅ Cleared {cache_info}"))

            # Clear all configured caches
            cache_aliases = getattr(settings, "CACHES", {}).keys()
            for alias in cache_aliases:
                if alias != "default":  # Already cleared above
                    try:
                        cache_backend = caches[alias]
                        if not self.dry_run:
                            cache_backend.clear()

                        cache_info = (
                            f"{alias} cache ({cache_backend.__class__.__name__})"
                        )
                        self.stdout.write(
                            self.style.SUCCESS(f" ✅ Cleared {cache_info}")
                        )
                    except Exception as e:
                        self.stdout.write(
                            self.style.WARNING(
                                f" ⚠️ Could not clear {alias} cache: {e}"
                            )
                        )

        except Exception as e:
            self.stdout.write(
                self.style.ERROR(f" ❌ Error clearing Django cache: {e}")
            )
    def clear_pycache(self):
        """Clear Python __pycache__ directories and .pyc files."""
        self.stdout.write("🐍 Clearing Python __pycache__ and .pyc files...")

        removed_count = 0
        removed_size = 0

        try:
            # Start from project root
            project_root = Path(settings.BASE_DIR)

            # Find and remove __pycache__ directories
            for pycache_dir in project_root.rglob("__pycache__"):
                if pycache_dir.is_dir():
                    try:
                        # Calculate size before removal
                        dir_size = sum(
                            f.stat().st_size
                            for f in pycache_dir.rglob("*")
                            if f.is_file()
                        )
                        removed_size += dir_size

                        if self.verbose:
                            self.stdout.write(f" 🗑️ Removing: {pycache_dir}")

                        if not self.dry_run:
                            shutil.rmtree(pycache_dir)

                        removed_count += 1
                    except Exception as e:
                        self.stdout.write(
                            self.style.WARNING(
                                f" ⚠️ Could not remove {pycache_dir}: {e}"
                            )
                        )

            # Find and remove .pyc files
            for pyc_file in project_root.rglob("*.pyc"):
                try:
                    file_size = pyc_file.stat().st_size
                    removed_size += file_size

                    if self.verbose:
                        self.stdout.write(f" 🗑️ Removing: {pyc_file}")

                    if not self.dry_run:
                        pyc_file.unlink()

                    removed_count += 1
                except Exception as e:
                    self.stdout.write(
                        self.style.WARNING(f" ⚠️ Could not remove {pyc_file}: {e}")
                    )

            # Format file size
            size_mb = removed_size / (1024 * 1024)
            self.stdout.write(
                self.style.SUCCESS(
                    f" ✅ Removed {removed_count} Python cache items ({size_mb:.2f} MB)"
                )
            )

        except Exception as e:
            self.stdout.write(
                self.style.ERROR(f" ❌ Error clearing Python cache: {e}")
            )
    def clear_static_cache(self):
        """Clear static files cache."""
        self.stdout.write("📦 Clearing static files cache...")

        try:
            static_root = getattr(settings, "STATIC_ROOT", None)

            if static_root and Path(static_root).exists():
                static_path = Path(static_root)

                # Calculate size
                total_size = sum(
                    f.stat().st_size for f in static_path.rglob("*") if f.is_file()
                )
                size_mb = total_size / (1024 * 1024)

                if self.verbose:
                    self.stdout.write(f" 🗑️ Removing: {static_path}")

                if not self.dry_run:
                    shutil.rmtree(static_path)
                    static_path.mkdir(parents=True, exist_ok=True)

                self.stdout.write(
                    self.style.SUCCESS(
                        f" ✅ Cleared static files cache ({size_mb:.2f} MB)"
                    )
                )
            else:
                self.stdout.write(
                    self.style.WARNING(
                        " ⚠️ No STATIC_ROOT configured or directory doesn't exist"
                    )
                )

        except Exception as e:
            self.stdout.write(
                self.style.ERROR(f" ❌ Error clearing static cache: {e}")
            )
    def clear_sessions_cache(self):
        """Clear session cache if using cache-based sessions."""
        self.stdout.write("🔐 Clearing session cache...")

        try:
            session_engine = getattr(settings, "SESSION_ENGINE", "")

            if "cache" in session_engine:
                # Using cache-based sessions
                session_cache_alias = getattr(
                    settings, "SESSION_CACHE_ALIAS", "default"
                )
                session_cache = caches[session_cache_alias]

                if not self.dry_run:
                    # Clear session keys (this is a simplified approach)
                    # In production, you might want more sophisticated session clearing
                    session_cache.clear()

                self.stdout.write(
                    self.style.SUCCESS(
                        f" ✅ Cleared cache-based sessions ({session_cache_alias})"
                    )
                )
            else:
                self.stdout.write(
                    self.style.WARNING(" ⚠️ Not using cache-based sessions")
                )

        except Exception as e:
            self.stdout.write(
                self.style.ERROR(f" ❌ Error clearing session cache: {e}")
            )
    def clear_template_cache(self):
        """Clear template cache."""
        self.stdout.write("📄 Clearing template cache...")

        try:
            # Clear template cache if using cached template loader
            from django.template import engines
            from django.template.loaders.cached import Loader as CachedLoader

            cleared_engines = 0
            for engine in engines.all():
                try:
                    # Check for DjangoTemplates engine with cached loaders
                    engine_backend = getattr(engine, "backend", "")
                    if "DjangoTemplates" in engine_backend:
                        # Get engine instance safely
                        engine_instance = getattr(engine, "engine", None)
                        if engine_instance:
                            template_loaders = getattr(
                                engine_instance, "template_loaders", []
                            )
                            for loader in template_loaders:
                                if isinstance(loader, CachedLoader):
                                    if not self.dry_run:
                                        loader.reset()
                                    cleared_engines += 1
                                    if self.verbose:
                                        self.stdout.write(
                                            f" 🗑️ Cleared cached loader: {loader}"
                                        )

                    # Check for Jinja2 engines (if present)
                    elif "Jinja2" in engine_backend and hasattr(engine, "env"):
                        env = getattr(engine, "env", None)
                        if env and hasattr(env, "cache"):
                            if not self.dry_run:
                                env.cache.clear()
                            cleared_engines += 1
                            if self.verbose:
                                self.stdout.write(
                                    f" 🗑️ Cleared Jinja2 cache: {engine}"
                                )

                except Exception as e:
                    if self.verbose:
                        self.stdout.write(
                            self.style.WARNING(
                                f" ⚠️ Could not clear cache for engine {engine}: {e}"
                            )
                        )

            if cleared_engines > 0:
                self.stdout.write(
                    self.style.SUCCESS(
                        f" ✅ Cleared template cache for "
                        f"{cleared_engines} loaders/engines"
                    )
                )
            else:
                self.stdout.write(
                    self.style.WARNING(" ⚠️ No cached template loaders found")
                )

        except Exception as e:
            self.stdout.write(
                self.style.ERROR(f" ❌ Error clearing template cache: {e}")
            )
def clear_tailwind_cache(self):
|
||||
"""Clear Tailwind CSS build cache."""
|
||||
self.stdout.write("🎨 Clearing Tailwind CSS cache...")
|
||||
|
||||
try:
|
||||
# Look for common Tailwind cache directories
|
||||
project_root = Path(settings.BASE_DIR)
|
||||
cache_paths = [
|
||||
project_root / "node_modules" / ".cache",
|
||||
project_root / ".tailwindcss-cache",
|
||||
project_root / "static" / "css" / ".cache",
|
||||
]
|
||||
|
||||
cleared_count = 0
|
||||
for cache_path in cache_paths:
|
||||
if cache_path.exists():
|
||||
try:
|
||||
if self.verbose:
|
||||
self.stdout.write(f" 🗑️ Removing: {cache_path}")
|
||||
|
||||
if not self.dry_run:
|
||||
if cache_path.is_file():
|
||||
cache_path.unlink()
|
||||
else:
|
||||
shutil.rmtree(cache_path)
|
||||
|
||||
cleared_count += 1
|
||||
except Exception as e:
|
||||
self.stdout.write(
|
||||
self.style.WARNING(
|
||||
f" ⚠️ Could not remove {cache_path}: {e}"
|
||||
)
|
||||
)
|
||||
|
||||
if cleared_count > 0:
|
||||
self.stdout.write(
|
||||
self.style.SUCCESS(
|
||||
f" ✅ Cleared {cleared_count} Tailwind cache directories"
|
||||
)
|
||||
)
|
||||
else:
|
||||
self.stdout.write(
|
||||
self.style.WARNING(" ⚠️ No Tailwind cache directories found")
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
self.stdout.write(
|
||||
self.style.ERROR(f" ❌ Error clearing Tailwind cache: {e}")
|
||||
)
|
||||
|
||||
def clear_opcache(self):
|
||||
"""Clear PHP OPcache if available."""
|
||||
self.stdout.write("⚡ Clearing OPcache...")
|
||||
|
||||
try:
|
||||
# This is mainly for mixed environments
|
||||
php_code = (
|
||||
"if (function_exists('opcache_reset')) { "
|
||||
"opcache_reset(); echo 'cleared'; } "
|
||||
"else { echo 'not_available'; }"
|
||||
)
|
||||
result = subprocess.run(
|
||||
["php", "-r", php_code],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=10,
|
||||
)
|
||||
|
||||
if result.returncode == 0:
|
||||
if "cleared" in result.stdout:
|
||||
self.stdout.write(
|
||||
self.style.SUCCESS(" ✅ OPcache cleared successfully")
|
||||
)
|
||||
else:
|
||||
self.stdout.write(self.style.WARNING(" ⚠️ OPcache not available"))
|
||||
else:
|
||||
self.stdout.write(
|
||||
self.style.WARNING(
|
||||
" ⚠️ PHP not available or OPcache not accessible"
|
||||
)
|
||||
)
|
||||
|
||||
except (subprocess.TimeoutExpired, FileNotFoundError):
|
||||
self.stdout.write(
|
||||
self.style.WARNING(" ⚠️ PHP not found or not accessible")
|
||||
)
|
||||
except Exception as e:
|
||||
self.stdout.write(self.style.ERROR(f" ❌ Error clearing OPcache: {e}"))
|
||||
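Each `clear_*` method above wraps its destructive step in the same `if not self.dry_run:` guard so the command can report what it would do without mutating anything. A minimal standalone sketch of that pattern (no Django required; `FakeCache` and `clear_cache` are illustrative names, not part of the project):

```python
# Dict-backed stand-in for a cache backend, used only for this sketch.
class FakeCache:
    def __init__(self):
        self.data = {"k": "v"}

    def clear(self):
        self.data.clear()


def clear_cache(cache, dry_run=False):
    """Only mutate when dry_run is False; return the remaining entry count."""
    if not dry_run:
        cache.clear()
    return len(cache.data)


cache = FakeCache()
assert clear_cache(cache, dry_run=True) == 1   # dry run: nothing removed
assert clear_cache(cache, dry_run=False) == 0  # real run: actually cleared
```

The same flag also gates `loader.reset()`, `shutil.rmtree()`, and `session_cache.clear()` above, which is what makes `--dry-run` safe to combine with `--verbose` for auditing.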
309
backend/apps/core/management/commands/test_trending.py
Normal file
@@ -0,0 +1,309 @@
from django.core.management.base import BaseCommand
from django.contrib.contenttypes.models import ContentType
from django.utils import timezone
from apps.parks.models.parks import Park
from apps.rides.models.rides import Ride
from apps.parks.models.companies import Company
from apps.core.analytics import PageView
from apps.core.services.trending_service import trending_service
from datetime import datetime, timedelta
import random


class Command(BaseCommand):
    help = "Test the trending algorithm with sample data"

    def add_arguments(self, parser):
        parser.add_argument(
            "--clean",
            action="store_true",
            help="Clean existing test data before creating new data",
        )
        parser.add_argument(
            "--verbose",
            action="store_true",
            help="Show detailed output",
        )

    def handle(self, *args, **options):
        self.verbose = options["verbose"]

        if options["clean"]:
            self.clean_test_data()

        self.create_test_data()
        self.test_trending_algorithm()
        self.test_api_format()

        self.stdout.write(
            self.style.SUCCESS("✓ Trending system test completed successfully!")
        )

    def clean_test_data(self):
        """Clean existing test data."""
        self.stdout.write("Cleaning existing test data...")

        # Delete test PageViews
        PageView.objects.filter(
            content_type__in=[
                ContentType.objects.get_for_model(Park),
                ContentType.objects.get_for_model(Ride),
            ]
        ).delete()

        self.stdout.write("✓ Test data cleaned")

    def create_test_data(self):
        """Create sample parks, rides, and page views for testing."""
        self.stdout.write("Creating test data...")

        # Create or get default operator company
        operator, created = Company.objects.get_or_create(
            name="Default Theme Park Operator",
            defaults={
                "roles": ["OPERATOR"],
                "description": "Default operator for test parks",
            },
        )
        if created and self.verbose:
            self.stdout.write(f" Created operator company: {operator.name}")

        # Get or create test parks and rides
        parks_data = [
            {
                "name": "Cedar Point",
                "slug": "cedar-point",
                "description": "America's Roller Coast featuring world-class roller coasters",
                "average_rating": 9.2,
                "opening_date": datetime(1870, 1, 1).date(),
                "operator": operator,
            },
            {
                "name": "Magic Kingdom",
                "slug": "magic-kingdom",
                "description": "Walt Disney World's most magical theme park",
                "average_rating": 9.5,
                "opening_date": datetime(1971, 10, 1).date(),
                "operator": operator,
            },
            {
                "name": "Six Flags Great Adventure",
                "slug": "six-flags-great-adventure",
                "description": "Home to Kingda Ka and incredible thrills",
                "average_rating": 8.8,
                "opening_date": datetime(1974, 7, 1).date(),
                "operator": operator,
            },
        ]

        # Create parks
        parks = []
        for park_data in parks_data:
            park, created = Park.objects.get_or_create(
                name=park_data["name"], defaults=park_data
            )
            parks.append(park)
            if created and self.verbose:
                self.stdout.write(f" Created park: {park.name}")

        # Now create rides - they need park references
        rides_data = [
            {
                "name": "Steel Vengeance",
                "slug": "steel-vengeance",
                "description": "Hybrid roller coaster at Cedar Point",
                "park": next(p for p in parks if p.name == "Cedar Point"),
                "category": "RC",  # Roller Coaster
                "average_rating": 9.8,
                "opening_date": datetime(2018, 5, 5).date(),
            },
            {
                "name": "Space Mountain",
                "slug": "space-mountain",
                "description": "Indoor space-themed roller coaster",
                "park": next(p for p in parks if p.name == "Magic Kingdom"),
                "category": "RC",  # Roller Coaster
                "average_rating": 8.5,
                "opening_date": datetime(1975, 1, 15).date(),
            },
            {
                "name": "Kingda Ka",
                "slug": "kingda-ka",
                "description": "World's tallest roller coaster",
                "park": next(p for p in parks if p.name == "Six Flags Great Adventure"),
                "category": "RC",  # Roller Coaster
                "average_rating": 9.0,
                "opening_date": datetime(2005, 5, 21).date(),
            },
            {
                "name": "Millennium Force",
                "slug": "millennium-force",
                "description": "Legendary steel roller coaster",
                "park": next(p for p in parks if p.name == "Cedar Point"),
                "category": "RC",  # Roller Coaster
                "average_rating": 9.4,
                "opening_date": datetime(2000, 5, 13).date(),
            },
        ]

        # Create rides
        rides = []
        for ride_data in rides_data:
            ride, created = Ride.objects.get_or_create(
                name=ride_data["name"], defaults=ride_data
            )
            rides.append(ride)
            if created and self.verbose:
                self.stdout.write(f" Created ride: {ride.name}")

        # Create PageViews with different patterns to test trending
        self.create_page_views(parks, rides)

        self.stdout.write("✓ Test data created")

    def create_page_views(self, parks, rides):
        """Create PageViews with different trending patterns."""
        now = timezone.now()

        # Pattern 1: Recently trending item (Steel Vengeance)
        steel_vengeance = next(r for r in rides if r.name == "Steel Vengeance")
        self.create_views_for_content(
            steel_vengeance, recent_views=50, older_views=10, base_time=now
        )

        # Pattern 2: Consistently popular item (Space Mountain)
        space_mountain = next(r for r in rides if r.name == "Space Mountain")
        self.create_views_for_content(
            space_mountain, recent_views=30, older_views=25, base_time=now
        )

        # Pattern 3: Declining popularity (Kingda Ka)
        kingda_ka = next(r for r in rides if r.name == "Kingda Ka")
        self.create_views_for_content(
            kingda_ka, recent_views=5, older_views=40, base_time=now
        )

        # Pattern 4: New but growing (Millennium Force)
        millennium_force = next(r for r in rides if r.name == "Millennium Force")
        self.create_views_for_content(
            millennium_force, recent_views=25, older_views=5, base_time=now
        )

        # Create some park views too
        cedar_point = next(p for p in parks if p.name == "Cedar Point")
        self.create_views_for_content(
            cedar_point, recent_views=35, older_views=20, base_time=now
        )

        if self.verbose:
            self.stdout.write(" Created PageView data for trending analysis")

    def create_views_for_content(
        self, content_object, recent_views, older_views, base_time
    ):
        """Create PageViews for a content object with specified patterns."""
        content_type = ContentType.objects.get_for_model(type(content_object))

        # Create recent views (last 2 hours)
        for i in range(recent_views):
            view_time = base_time - timedelta(
                minutes=random.randint(0, 120)  # Last 2 hours
            )
            PageView.objects.create(
                content_type=content_type,
                object_id=content_object.id,
                ip_address=f"192.168.1.{random.randint(1, 255)}",
                user_agent="Test Agent",
                timestamp=view_time,
            )

        # Create older views (2-24 hours ago)
        for i in range(older_views):
            view_time = base_time - timedelta(hours=random.randint(2, 24))
            PageView.objects.create(
                content_type=content_type,
                object_id=content_object.id,
                ip_address=f"10.0.0.{random.randint(1, 255)}",
                user_agent="Test Agent",
                timestamp=view_time,
            )

    def test_trending_algorithm(self):
        """Test the trending algorithm functionality."""
        self.stdout.write("Testing trending algorithm...")

        # Test trending content for different content types
        trending_parks = trending_service.get_trending_content(
            content_type="parks", limit=3
        )
        trending_rides = trending_service.get_trending_content(
            content_type="rides", limit=3
        )
        trending_all = trending_service.get_trending_content(
            content_type="all", limit=5
        )

        # Test new content
        new_parks = trending_service.get_new_content(content_type="parks", limit=3)
        new_rides = trending_service.get_new_content(content_type="rides", limit=3)
        new_all = trending_service.get_new_content(content_type="all", limit=5)

        if self.verbose:
            self.stdout.write(f" Trending parks: {len(trending_parks)} results")
            self.stdout.write(f" Trending rides: {len(trending_rides)} results")
            self.stdout.write(f" Trending all: {len(trending_all)} results")
            self.stdout.write(f" New parks: {len(new_parks)} results")
            self.stdout.write(f" New rides: {len(new_rides)} results")
            self.stdout.write(f" New all: {len(new_all)} results")

        self.stdout.write("✓ Trending algorithm working correctly")

    def test_api_format(self):
        """Test that API responses match expected frontend format."""
        self.stdout.write("Testing API response format...")

        # Test trending content format
        trending_parks = trending_service.get_trending_content(
            content_type="parks", limit=3
        )
        trending_rides = trending_service.get_trending_content(
            content_type="rides", limit=3
        )

        # Test new content format
        new_parks = trending_service.get_new_content(content_type="parks", limit=3)
        new_rides = trending_service.get_new_content(content_type="rides", limit=3)

        # Verify trending data structure
        if trending_parks:
            item = trending_parks[0]
            required_trending_fields = [
                "id",
                "name",
                "slug",
                "views",
                "views_change",
                "rank",
            ]
            for field in required_trending_fields:
                if field not in item:
                    raise ValueError(f"Missing required trending field: {field}")

        # Verify new content data structure
        if new_parks:
            item = new_parks[0]
            required_new_fields = ["id", "name", "slug"]
            for field in required_new_fields:
                if field not in item:
                    raise ValueError(f"Missing required new content field: {field}")

        if self.verbose:
            self.stdout.write(" Sample trending park data:")
            if trending_parks:
                self.stdout.write(f" {trending_parks[0]}")

            self.stdout.write(" Sample new park data:")
            if new_parks:
                self.stdout.write(f" {new_parks[0]}")

        self.stdout.write("✓ API format validation passed")
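The four view patterns seeded above (recently trending, consistently popular, declining, new but growing) are chosen so that any recency-weighted score will rank them distinctly. The actual formula inside `trending_service` is not shown in this diff; as a purely illustrative stand-in, a simple recent-to-older ratio already orders the four patterns the way the test expects:

```python
def trend_score(recent_views: int, older_views: int) -> float:
    """Illustrative score only: weight recent activity against the older
    baseline. This is NOT the project's trending_service formula (which this
    diff does not show); it just demonstrates why the seeded patterns differ.
    """
    return recent_views / (older_views + 1)


# (recent_views, older_views) as seeded by create_page_views above.
patterns = {
    "Steel Vengeance": (50, 10),   # recently trending
    "Millennium Force": (25, 5),   # new but growing
    "Space Mountain": (30, 25),    # consistently popular
    "Kingda Ka": (5, 40),          # declining
}
ranked = sorted(patterns, key=lambda k: trend_score(*patterns[k]), reverse=True)
assert ranked[0] == "Steel Vengeance"
assert ranked[-1] == "Kingda Ka"
```

Any monotone recency weighting would produce the same head and tail of this ranking, which is what makes these fixtures useful regardless of the service's exact scoring.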
@@ -6,30 +6,31 @@ from apps.core.analytics import PageView


 class Command(BaseCommand):
-    help = "Updates trending parks and rides cache based on views in the last 24 hours"
+    help = "Updates trending parks and rides cache based on views in the last 7 days"

     def handle(self, *args, **kwargs):
         """
         Updates the trending parks and rides in the cache.

-        This command is designed to be run every hour via cron to keep the trending
-        items up to date. It looks at page views from the last 24 hours and caches
+        This command is designed to be run once daily via cron to keep the trending
+        items up to date. It looks at page views from the last 7 days and caches
         the top 10 most viewed parks and rides.

         The cached data is used by the home page to display trending items without
         having to query the database on every request.
         """
-        # Get top 10 trending parks and rides from the last 24 hours
-        trending_parks = PageView.get_trending_items(Park, hours=24, limit=10)
-        trending_rides = PageView.get_trending_items(Ride, hours=24, limit=10)
+        # Get top 10 trending parks and rides from the last 7 days (168 hours)
+        trending_parks = PageView.get_trending_items(Park, hours=168, limit=10)
+        trending_rides = PageView.get_trending_items(Ride, hours=168, limit=10)

-        # Cache the results for 1 hour
-        cache.set("trending_parks", trending_parks, 3600)  # 3600 seconds = 1 hour
-        cache.set("trending_rides", trending_rides, 3600)
+        # Cache the results for 24 hours (daily refresh)
+        cache.set("trending_parks", trending_parks, 86400)  # 86400 seconds = 24 hours
+        cache.set("trending_rides", trending_rides, 86400)

         self.stdout.write(
             self.style.SUCCESS(
                 "Successfully updated trending parks and rides. "
-                "Cached 10 items each for parks and rides based on views in the last 24 hours."
+                "Cached 10 items each for parks and rides based on views "
+                "in the last 7 days."
             )
         )
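The updated docstring says this command should run once daily via cron. A crontab sketch for that schedule (the command name, project path, and log path are assumptions, not confirmed by this diff; adjust them to your deployment):

```shell
# Hypothetical crontab entry: refresh the trending caches daily at 03:00.
# The 24-hour cache TTL (86400s) set by the command matches this cadence,
# so the cache never expires before the next refresh.
0 3 * * * cd /path/to/backend && uv run manage.py update_trending >> /var/log/trending.log 2>&1
```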
@@ -1,22 +1,15 @@
-# Core middleware modules
+"""
+Core middleware package.

-# Import middleware classes from the analytics module
-from .analytics import PageViewMiddleware, PgHistoryContextMiddleware
+This package contains middleware components for the Django application,
+including view tracking and other core functionality.
+"""

 # Import middleware classes from the performance_middleware.py module
 from .performance_middleware import (
     PerformanceMiddleware,
     QueryCountMiddleware,
     DatabaseConnectionMiddleware,
     CachePerformanceMiddleware,
 )
+from .view_tracking import ViewTrackingMiddleware, get_view_stats_for_content
+from .analytics import PgHistoryContextMiddleware

 # Make all middleware classes available at the package level
 __all__ = [
-    "PageViewMiddleware",
+    "ViewTrackingMiddleware",
+    "get_view_stats_for_content",
     "PgHistoryContextMiddleware",
     "PerformanceMiddleware",
     "QueryCountMiddleware",
     "DatabaseConnectionMiddleware",
     "CachePerformanceMiddleware",
 ]
@@ -44,41 +44,3 @@ class PgHistoryContextMiddleware:
     def __call__(self, request):
         response = self.get_response(request)
         return response
-
-
-class PageViewMiddleware(MiddlewareMixin):
-    """Middleware to track page views for DetailView-based pages."""
-
-    def process_view(self, request, view_func, view_args, view_kwargs):
-        # Only track GET requests
-        if request.method != "GET":
-            return None
-
-        # Get view class if it exists
-        view_class = getattr(view_func, "view_class", None)
-        if not view_class or not issubclass(view_class, DetailView):
-            return None
-
-        # Get the object if it's a detail view
-        try:
-            view_instance = view_class()
-            view_instance.request = request
-            view_instance.args = view_args
-            view_instance.kwargs = view_kwargs
-            obj = view_instance.get_object()
-        except (AttributeError, Exception):
-            return None
-
-        # Record the page view
-        try:
-            PageView.objects.create(
-                content_type=ContentType.objects.get_for_model(obj.__class__),
-                object_id=obj.pk,
-                ip_address=request.META.get("REMOTE_ADDR", ""),
-                user_agent=request.META.get("HTTP_USER_AGENT", "")[:512],
-            )
-        except Exception:
-            # Fail silently to not interrupt the request
-            pass
-
-        return None
331
backend/apps/core/middleware/view_tracking.py
Normal file
@@ -0,0 +1,331 @@
"""
View Tracking Middleware for automatic PageView recording.

This middleware automatically tracks page views for park and ride pages,
implementing IP-based deduplication to prevent spam and provide accurate
analytics for the trending algorithm.
"""

import logging
import re
from datetime import timedelta
from typing import Optional, Union

from django.http import HttpRequest, HttpResponse
from django.utils import timezone
from django.contrib.contenttypes.models import ContentType
from django.core.cache import cache
from django.conf import settings

from apps.core.analytics import PageView
from apps.parks.models import Park
from apps.rides.models import Ride

# Type alias for content objects
ContentObject = Union[Park, Ride]

logger = logging.getLogger(__name__)


class ViewTrackingMiddleware:
    """
    Middleware for tracking page views with IP deduplication.

    Automatically creates PageView records when users visit park or ride pages.
    Implements 24-hour IP deduplication window to prevent view inflation.
    """

    def __init__(self, get_response):
        self.get_response = get_response
        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")

        # URL patterns for tracking - matches park and ride detail pages
        self.tracked_patterns = [
            (r"^/parks/(?P<slug>[\w-]+)/$", "park"),
            (r"^/rides/(?P<slug>[\w-]+)/$", "ride"),
            # Add API patterns if needed
            (r"^/api/v1/parks/(?P<slug>[\w-]+)/$", "park"),
            (r"^/api/v1/rides/(?P<slug>[\w-]+)/$", "ride"),
        ]

        # Compile patterns for performance
        self.compiled_patterns = [
            (re.compile(pattern), content_type)
            for pattern, content_type in self.tracked_patterns
        ]

        # Cache configuration
        self.cache_timeout = 60 * 15  # 15 minutes
        self.dedup_window_hours = 24

    def __call__(self, request: HttpRequest) -> HttpResponse:
        """Process the request and track views if applicable."""
        response = self.get_response(request)

        # Only track successful GET requests
        if (
            request.method == "GET"
            and 200 <= response.status_code < 300
            and not self._should_skip_tracking(request)
        ):
            try:
                self._track_view_if_applicable(request)
            except Exception as e:
                # Log error but don't break the request
                self.logger.error(f"Error tracking view: {e}", exc_info=True)

        return response

    def _should_skip_tracking(self, request: HttpRequest) -> bool:
        """Check if this request should be skipped for tracking."""
        # Skip if disabled in settings
        if not getattr(settings, "ENABLE_VIEW_TRACKING", True):
            return True

        # Skip requests from bots/crawlers
        user_agent = request.META.get("HTTP_USER_AGENT", "").lower()
        bot_indicators = [
            "bot",
            "crawler",
            "spider",
            "scraper",
            "facebook",
            "twitter",
            "linkedin",
            "google",
            "bing",
            "yahoo",
            "duckduckgo",
            "slurp",
        ]
        if any(indicator in user_agent for indicator in bot_indicators):
            return True

        # Skip requests without a real IP
        if not self._get_client_ip(request):
            return True

        # Skip AJAX requests (optional - depending on requirements)
        if request.META.get("HTTP_X_REQUESTED_WITH") == "XMLHttpRequest":
            return True

        return False

    def _track_view_if_applicable(self, request: HttpRequest) -> None:
        """Track view if the URL matches tracked patterns."""
        path = request.path

        for pattern, content_type in self.compiled_patterns:
            match = pattern.match(path)
            if match:
                slug = match.group("slug")
                self._record_page_view(request, content_type, slug)
                break

    def _record_page_view(
        self, request: HttpRequest, content_type: str, slug: str
    ) -> None:
        """Record a page view for the specified content."""
        client_ip = self._get_client_ip(request)
        if not client_ip:
            return

        try:
            # Get the content object
            content_obj = self._get_content_object(content_type, slug)
            if not content_obj:
                self.logger.warning(
                    f"Content not found: {content_type} with slug '{slug}'"
                )
                return

            # Check deduplication
            if self._is_duplicate_view(content_obj, client_ip):
                self.logger.debug(
                    f"Duplicate view skipped for {content_type} {slug} from {client_ip}"
                )
                return

            # Create PageView record
            self._create_page_view(content_obj, client_ip, request)

            self.logger.debug(
                f"Recorded view for {content_type} {slug} from {client_ip}"
            )

        except Exception as e:
            self.logger.error(
                f"Failed to record page view for {content_type} {slug}: {e}"
            )

    def _get_content_object(
        self, content_type: str, slug: str
    ) -> Optional[ContentObject]:
        """Get the content object by type and slug."""
        try:
            if content_type == "park":
                # Use get_by_slug method to handle historical slugs
                park, _ = Park.get_by_slug(slug)
                return park
            elif content_type == "ride":
                # For rides, we need to search by slug within parks
                return Ride.objects.filter(slug=slug).first()
            else:
                self.logger.warning(f"Unknown content type: {content_type}")
                return None

        except Park.DoesNotExist:
            return None
        except Exception as e:
            self.logger.error(f"Error getting {content_type} with slug {slug}: {e}")
            return None

    def _is_duplicate_view(self, content_obj: ContentObject, client_ip: str) -> bool:
        """Check if this view is a duplicate within the deduplication window."""
        # Use cache for performance
        cache_key = self._get_dedup_cache_key(content_obj, client_ip)

        if cache.get(cache_key):
            return True

        # Check database as fallback
        content_type = ContentType.objects.get_for_model(content_obj)
        cutoff_time = timezone.now() - timedelta(hours=self.dedup_window_hours)

        existing_view = PageView.objects.filter(
            content_type=content_type,
            object_id=content_obj.pk,
            ip_address=client_ip,
            timestamp__gte=cutoff_time,
        ).exists()

        if not existing_view:
            # Set cache to prevent future duplicates
            cache.set(cache_key, True, timeout=self.dedup_window_hours * 3600)

        return existing_view

    def _create_page_view(
        self, content_obj: ContentObject, client_ip: str, request: HttpRequest
    ) -> None:
        """Create a new PageView record."""
        content_type = ContentType.objects.get_for_model(content_obj)

        # Extract additional metadata
        user_agent = request.META.get("HTTP_USER_AGENT", "")[
            :500
        ]  # Truncate long user agents
        referer = request.META.get("HTTP_REFERER", "")[:500]

        PageView.objects.create(
            content_type=content_type,
            object_id=content_obj.pk,
            ip_address=client_ip,
            user_agent=user_agent,
            referer=referer,
            path=request.path[:500],
        )

        # Update cache for deduplication
        cache_key = self._get_dedup_cache_key(content_obj, client_ip)
        cache.set(cache_key, True, timeout=self.dedup_window_hours * 3600)

    def _get_dedup_cache_key(self, content_obj: ContentObject, client_ip: str) -> str:
        """Generate cache key for deduplication."""
        content_type = ContentType.objects.get_for_model(content_obj)
        return f"pageview_dedup:{content_type.id}:{content_obj.pk}:{client_ip}"

    def _get_client_ip(self, request: HttpRequest) -> Optional[str]:
        """Extract client IP address from request."""
        # Check for forwarded IP (common in production with load balancers)
        x_forwarded_for = request.META.get("HTTP_X_FORWARDED_FOR")
        if x_forwarded_for:
            # Take the first IP in the chain (client IP)
            ip = x_forwarded_for.split(",")[0].strip()
            if self._is_valid_ip(ip):
                return ip

        # Check for real IP header (some proxy configurations)
        x_real_ip = request.META.get("HTTP_X_REAL_IP")
        if x_real_ip and self._is_valid_ip(x_real_ip):
            return x_real_ip

        # Fall back to remote address
        remote_addr = request.META.get("REMOTE_ADDR")
        if remote_addr and self._is_valid_ip(remote_addr):
            return remote_addr

        return None

    def _is_valid_ip(self, ip: str) -> bool:
        """Validate IPv4 address format and filter local/private ranges."""
        try:
            # Basic validation - check if it looks like an IPv4 address
            parts = ip.split(".")
            if len(parts) != 4:
                return False

            for part in parts:
                if not part.isdigit() or not 0 <= int(part) <= 255:
                    return False

            # Skip localhost and private IPs in production
            if getattr(settings, "SKIP_LOCAL_IPS", not settings.DEBUG):
                if ip.startswith(("127.", "192.168.", "10.")):
                    return False
                # 172.16.0.0/12 private range
                if ip.startswith("172.") and 16 <= int(parts[1]) <= 31:
                    return False

            return True

        except (ValueError, IndexError):
            return False


def get_view_stats_for_content(content_obj: ContentObject, hours: int = 24) -> dict:
    """
    Helper function to get view statistics for content.

    Args:
        content_obj: The content object (Park or Ride)
        hours: Time window in hours for stats

    Returns:
        Dictionary with view statistics
    """
    try:
        content_type = ContentType.objects.get_for_model(content_obj)
        cutoff_time = timezone.now() - timedelta(hours=hours)

        total_views = PageView.objects.filter(
            content_type=content_type,
            object_id=content_obj.pk,
            timestamp__gte=cutoff_time,
        ).count()

        unique_views = (
            PageView.objects.filter(
                content_type=content_type,
                object_id=content_obj.pk,
                timestamp__gte=cutoff_time,
            )
            .values("ip_address")
            .distinct()
            .count()
        )

        return {
            "total_views": total_views,
            "unique_views": unique_views,
            "hours": hours,
            "content_type": content_type.model,
            "content_id": content_obj.pk,
        }

    except Exception as e:
        logger.error(f"Error getting view stats: {e}")
        return {"total_views": 0, "unique_views": 0, "hours": hours, "error": str(e)}
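The deduplication in `_is_duplicate_view` boils down to: a (content, IP) pair counts at most once per 24-hour window, with a keyed cache entry short-circuiting the database check. A standalone sketch of that idea with a plain dict standing in for Django's cache (names here are illustrative; the real middleware also falls back to a `PageView` query before trusting a cache miss):

```python
# Dict-backed stand-in for Django's cache; values are expiry timestamps.
_dedup_cache: dict = {}
DEDUP_WINDOW_SECONDS = 24 * 3600


def is_duplicate_view(content_key: str, client_ip: str, now: float) -> bool:
    """Return True if this IP already viewed this object within the window."""
    key = f"pageview_dedup:{content_key}:{client_ip}"
    expires = _dedup_cache.get(key)
    if expires is not None and expires > now:
        return True
    # First view in the window: record it so later views are deduplicated.
    _dedup_cache[key] = now + DEDUP_WINDOW_SECONDS
    return False


t0 = 1_000_000.0
assert is_duplicate_view("park:42", "203.0.113.7", t0) is False       # first view counts
assert is_duplicate_view("park:42", "203.0.113.7", t0 + 60) is True   # same IP, same day
assert is_duplicate_view("park:42", "203.0.113.7", t0 + 25 * 3600) is False  # window expired
```

The database fallback in the real code matters because cache entries can be evicted before the 24-hour window ends; the query against `timestamp__gte=cutoff_time` keeps deduplication correct even then.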
415
backend/apps/core/services/entity_fuzzy_matching.py
Normal file
@@ -0,0 +1,415 @@
"""
Entity Fuzzy Matching Service for ThrillWiki

Provides intelligent entity matching when exact lookups fail, with authentication
prompts for suggesting new entity creation.

Features:
- Levenshtein distance for typo correction
- Phonetic matching using Soundex algorithm
- Partial name matching
- Priority-based scoring (parks > rides > companies)
- Authentication state-aware suggestions
"""

import re
from difflib import SequenceMatcher
from typing import List, Dict, Any, Optional, Tuple
from dataclasses import dataclass
from enum import Enum

from django.db.models import Q

from apps.parks.models import Company, Park
from apps.rides.models import Ride


class EntityType(Enum):
    """Supported entity types for fuzzy matching."""

    PARK = "park"
    RIDE = "ride"
    COMPANY = "company"


@dataclass
class FuzzyMatchResult:
    """Result of a fuzzy matching operation."""

    entity_type: EntityType
    entity: Any  # The actual model instance
    name: str
    slug: str
    score: float  # 0.0 to 1.0, higher is better match
    match_reason: str  # Description of why this was matched
    confidence: str  # 'high', 'medium', 'low'
    url: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        """Convert to dictionary for API responses."""
        return {
            "entity_type": self.entity_type.value,
            "name": self.name,
            "slug": self.slug,
            "score": round(self.score, 3),
            "match_reason": self.match_reason,
            "confidence": self.confidence,
            "url": self.url,
            "entity_id": getattr(self.entity, "id", None),
        }


@dataclass
class EntitySuggestion:
    """Suggestion for creating a new entity when no matches found."""

    suggested_name: str
    entity_type: EntityType
    requires_authentication: bool
    login_prompt: str
    signup_prompt: str
    creation_hint: str


class FuzzyMatchingAlgorithms:
    """Collection of fuzzy matching algorithms."""

    @staticmethod
    def levenshtein_distance(s1: str, s2: str) -> int:
        """Calculate Levenshtein distance between two strings."""
        if len(s1) < len(s2):
            return FuzzyMatchingAlgorithms.levenshtein_distance(s2, s1)

        if len(s2) == 0:
            return len(s1)

        previous_row = list(range(len(s2) + 1))
        for i, c1 in enumerate(s1):
            current_row = [i + 1]
            for j, c2 in enumerate(s2):
                insertions = previous_row[j + 1] + 1
                deletions = current_row[j] + 1
                substitutions = previous_row[j] + (c1 != c2)
                current_row.append(min(insertions, deletions, substitutions))
            previous_row = current_row

        return previous_row[-1]

    @staticmethod
    def similarity_ratio(s1: str, s2: str) -> float:
        """Calculate similarity ratio (0.0 to 1.0) using SequenceMatcher."""
        return SequenceMatcher(None, s1.lower(), s2.lower()).ratio()
    @staticmethod
    def soundex(name: str) -> str:
        """Generate Soundex code for phonetic matching."""
        name = re.sub(r"[^A-Za-z]", "", name.upper())
        if not name:
            return "0000"

        # Soundex digit mapping; vowels and H/W/Y map to "0" so they drop out
        # after duplicate collapsing (without this class, unmapped letters leak
        # into the code, e.g. "ROBERT" -> "RO1E" instead of "R163")
        soundex_map = {
            "BFPV": "1",
            "CGJKQSXZ": "2",
            "DT": "3",
            "L": "4",
            "MN": "5",
            "R": "6",
            "AEIOUHWY": "0",
        }

        first_letter = name[0]
        name = name[1:]

        # Replace letters with numbers
        for letters, number in soundex_map.items():
            name = re.sub(f"[{letters}]", number, name)

        # Remove consecutive duplicates
        name = re.sub(r"(\d)\1+", r"\1", name)

        # Remove zeros (the vowel/H/W/Y class)
        name = re.sub("0", "", name)

        # Pad or truncate to 4 characters
        soundex_code = (first_letter + name + "000")[:4]
        return soundex_code
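A standalone sketch of classic (simplified) Soundex, which the phonetic matcher above is based on; phonetically similar names should share a code:

```python
import re

def soundex(name: str) -> str:
    # Classic (simplified) Soundex: keep the first letter, code the rest,
    # collapse consecutive duplicates, drop vowels/H/W/Y, pad to 4 characters
    name = re.sub(r"[^A-Za-z]", "", name.upper())
    if not name:
        return "0000"
    table = str.maketrans(
        "BFPVCGJKQSXZDTLMNRAEIOUHWY",
        "111122222222334556" + "0" * 8,
    )
    digits = name[1:].translate(table)
    digits = re.sub(r"(\d)\1+", r"\1", digits)  # collapse runs of equal codes
    digits = digits.replace("0", "")            # drop the vowel class
    return (name[0] + digits + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # both code to R163
```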
    @staticmethod
    def partial_match_score(query: str, target: str) -> float:
        """Calculate partial matching score for substring matches."""
        query_lower = query.lower()
        target_lower = target.lower()

        # Exact match
        if query_lower == target_lower:
            return 1.0

        # Starts with
        if target_lower.startswith(query_lower):
            return 0.8 + (len(query) / len(target)) * 0.15

        # Contains
        if query_lower in target_lower:
            return 0.6 + (len(query) / len(target)) * 0.2

        # Words match
        query_words = set(query_lower.split())
        target_words = set(target_lower.split())
        if query_words & target_words:
            intersection = len(query_words & target_words)
            union = len(query_words | target_words)
            return 0.4 + (intersection / union) * 0.3

        return 0.0


class EntityFuzzyMatcher:
    """Main fuzzy matching service for entities."""

    # Matching thresholds
    HIGH_CONFIDENCE_THRESHOLD = 0.8
    MEDIUM_CONFIDENCE_THRESHOLD = 0.6
    LOW_CONFIDENCE_THRESHOLD = 0.4

    # Maximum results to consider
    MAX_CANDIDATES = 50
    MAX_RESULTS = 5

    def __init__(self):
        self.algorithms = FuzzyMatchingAlgorithms()

    def find_entity(
        self, query: str, entity_types: Optional[List[EntityType]] = None, user=None
    ) -> Tuple[List[FuzzyMatchResult], Optional[EntitySuggestion]]:
        """
        Find entities matching the query with fuzzy matching.

        Args:
            query: Search query string
            entity_types: Limit search to specific entity types
            user: Current user for authentication context

        Returns:
            Tuple of (matches, suggestion_for_new_entity)
        """
        if not query or len(query.strip()) < 2:
            return [], None

        query = query.strip()
        entity_types = entity_types or [
            EntityType.PARK,
            EntityType.RIDE,
            EntityType.COMPANY,
        ]

        # Collect all potential matches
        candidates = []
        for entity_type in entity_types:
            candidates.extend(self._get_candidates(query, entity_type))

        # Score and rank candidates
        matches = self._score_and_rank_candidates(query, candidates)

        # Generate suggestion if no good matches found
        suggestion = None
        if not matches or matches[0].score < self.LOW_CONFIDENCE_THRESHOLD:
            suggestion = self._generate_entity_suggestion(query, entity_types, user)

        return matches[: self.MAX_RESULTS], suggestion

    def _get_candidates(
        self, query: str, entity_type: EntityType
    ) -> List[Dict[str, Any]]:
        """Get potential matching candidates for an entity type."""
        candidates = []

        if entity_type == EntityType.PARK:
            parks = Park.objects.filter(
                Q(name__icontains=query)
                | Q(slug__icontains=query.lower().replace(" ", "-"))
                | Q(former_names__icontains=query)
            )[: self.MAX_CANDIDATES]

            for park in parks:
                candidates.append(
                    {
                        "entity_type": EntityType.PARK,
                        "entity": park,
                        "name": park.name,
                        "slug": park.slug,
                        "search_names": [park.name],
                        "url": getattr(park, "get_absolute_url", lambda: None)(),
                        "priority_boost": 0.1,  # Parks get priority
                    }
                )

        elif entity_type == EntityType.RIDE:
            rides = Ride.objects.select_related("park").filter(
                Q(name__icontains=query)
                | Q(slug__icontains=query.lower().replace(" ", "-"))
                | Q(former_names__icontains=query)
                | Q(park__name__icontains=query)
            )[: self.MAX_CANDIDATES]

            for ride in rides:
                candidates.append(
                    {
                        "entity_type": EntityType.RIDE,
                        "entity": ride,
                        "name": ride.name,
                        "slug": ride.slug,
                        "search_names": [ride.name, f"{ride.park.name} {ride.name}"],
                        "url": getattr(ride, "get_absolute_url", lambda: None)(),
                        "priority_boost": 0.05,  # Rides get some priority
                    }
                )

        elif entity_type == EntityType.COMPANY:
            companies = Company.objects.filter(
                Q(name__icontains=query)
                | Q(slug__icontains=query.lower().replace(" ", "-"))
            )[: self.MAX_CANDIDATES]

            for company in companies:
                candidates.append(
                    {
                        "entity_type": EntityType.COMPANY,
                        "entity": company,
                        "name": company.name,
                        "slug": company.slug,
                        "search_names": [company.name],
                        "url": getattr(company, "get_absolute_url", lambda: None)(),
                        "priority_boost": 0.0,  # Companies get no priority boost
                    }
                )

        return candidates

    def _score_and_rank_candidates(
        self, query: str, candidates: List[Dict[str, Any]]
    ) -> List[FuzzyMatchResult]:
        """Score and rank all candidates using multiple algorithms."""
        scored_matches = []

        for candidate in candidates:
            best_score = 0.0
            best_reason = ""

            # Test against all search names for this candidate
            for search_name in candidate["search_names"]:
                # Algorithm 1: Sequence similarity
                similarity_score = self.algorithms.similarity_ratio(query, search_name)
                if similarity_score > best_score:
                    best_score = similarity_score
                    best_reason = f"Text similarity with '{search_name}'"

                # Algorithm 2: Partial matching
                partial_score = self.algorithms.partial_match_score(query, search_name)
                if partial_score > best_score:
                    best_score = partial_score
                    best_reason = f"Partial match with '{search_name}'"

                # Algorithm 3: Levenshtein distance
                if len(query) > 3 and len(search_name) > 3:
                    max_len = max(len(query), len(search_name))
                    distance = self.algorithms.levenshtein_distance(query, search_name)
                    lev_score = 1.0 - (distance / max_len)
                    if lev_score > best_score:
                        best_score = lev_score
                        best_reason = f"Similar spelling to '{search_name}'"

                # Algorithm 4: Soundex phonetic matching
                if len(query) > 2 and len(search_name) > 2:
                    query_soundex = self.algorithms.soundex(query)
                    name_soundex = self.algorithms.soundex(search_name)
                    if query_soundex == name_soundex and best_score < 0.7:
                        best_score = max(best_score, 0.7)
                        best_reason = f"Sounds like '{search_name}'"

            # Apply priority boost
            best_score += candidate["priority_boost"]
            best_score = min(1.0, best_score)  # Cap at 1.0

            # Determine confidence level
            if best_score >= self.HIGH_CONFIDENCE_THRESHOLD:
                confidence = "high"
            elif best_score >= self.MEDIUM_CONFIDENCE_THRESHOLD:
                confidence = "medium"
            else:
                confidence = "low"

            # Only include if above minimum threshold
            if best_score >= self.LOW_CONFIDENCE_THRESHOLD:
                match = FuzzyMatchResult(
                    entity_type=candidate["entity_type"],
                    entity=candidate["entity"],
                    name=candidate["name"],
                    slug=candidate["slug"],
                    score=best_score,
                    match_reason=best_reason,
                    confidence=confidence,
                    url=candidate["url"],
                )
                scored_matches.append(match)

        # Sort by score (highest first) and return
        return sorted(scored_matches, key=lambda x: x.score, reverse=True)

    def _generate_entity_suggestion(
        self, query: str, entity_types: List[EntityType], user
    ) -> EntitySuggestion:
        """Generate suggestion for creating new entity when no matches found."""

        # Determine most likely entity type based on query characteristics
        suggested_type = EntityType.PARK  # Default to park

        # Simple heuristics for entity type detection
        query_lower = query.lower()
        if any(
            word in query_lower
            for word in ["roller coaster", "ride", "coaster", "attraction"]
        ):
            suggested_type = EntityType.RIDE
        elif any(
            word in query_lower for word in ["inc", "corp", "company", "manufacturer"]
        ):
            suggested_type = EntityType.COMPANY
        elif EntityType.PARK in entity_types:
            suggested_type = EntityType.PARK
        elif entity_types:
            suggested_type = entity_types[0]

        # Clean up the suggested name
        suggested_name = " ".join(word.capitalize() for word in query.split())

        # Check if user is authenticated
        is_authenticated = (
            user and hasattr(user, "is_authenticated") and user.is_authenticated
        )

        # Generate appropriate prompts
        entity_name = suggested_type.value
        login_prompt = (
            f"Log in to suggest adding '{suggested_name}' as a new {entity_name}"
        )
        signup_prompt = (
            f"Sign up to contribute and add '{suggested_name}' to ThrillWiki"
        )
        creation_hint = (
            f"Help expand ThrillWiki by adding information about '{suggested_name}'"
        )

        return EntitySuggestion(
            suggested_name=suggested_name,
            entity_type=suggested_type,
            requires_authentication=not is_authenticated,
            login_prompt=login_prompt,
            signup_prompt=signup_prompt,
            creation_hint=creation_hint,
        )


# Global service instance
entity_fuzzy_matcher = EntityFuzzyMatcher()
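The ranking pipeline above combines plain string metrics, so the spelling score can be checked without Django. A standalone sketch using the same Levenshtein recurrence as `FuzzyMatchingAlgorithms.levenshtein_distance` (the query/name pair is illustrative):

```python
from difflib import SequenceMatcher

def levenshtein(s1: str, s2: str) -> int:
    # Dynamic-programming edit distance, same recurrence as the service
    if len(s1) < len(s2):
        return levenshtein(s2, s1)
    if not s2:
        return len(s1)
    previous = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1):
        current = [i + 1]
        for j, c2 in enumerate(s2):
            current.append(
                min(previous[j + 1] + 1, current[j] + 1, previous[j] + (c1 != c2))
            )
        previous = current
    return previous[-1]

query, name = "Cedar Pont", "Cedar Point"  # illustrative typo query
distance = levenshtein(query, name)
lev_score = 1.0 - distance / max(len(query), len(name))   # Algorithm 3 score
similarity = SequenceMatcher(None, query.lower(), name.lower()).ratio()  # Algorithm 1
print(distance, round(lev_score, 3), round(similarity, 3))
```

A single-character typo scores about 0.91 here, comfortably above the 0.8 high-confidence threshold.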
594 backend/apps/core/services/trending_service.py Normal file
@@ -0,0 +1,594 @@
"""
Trending Service for calculating and caching trending content.

This service implements the weighted trending algorithm that combines:
- View growth rates
- Content ratings
- Recency factors
- Popularity metrics

Results are cached in Redis for performance optimization.
"""

import logging
from datetime import datetime, timedelta
from typing import Dict, List, Any

from django.utils import timezone
from django.contrib.contenttypes.models import ContentType
from django.core.cache import cache
from django.db.models import Q

from apps.core.analytics import PageView
from apps.parks.models import Park
from apps.rides.models import Ride

logger = logging.getLogger(__name__)


class TrendingService:
    """
    Service for calculating trending content using weighted algorithm.

    Algorithm Components:
    - View Growth Rate (40% weight): Recent view increase vs historical
    - Rating Score (30% weight): Average user rating normalized
    - Recency Factor (20% weight): How recently content was added/updated
    - Popularity Boost (10% weight): Total view count normalization
    """

    # Algorithm weights (must sum to 1.0)
    WEIGHT_VIEW_GROWTH = 0.4
    WEIGHT_RATING = 0.3
    WEIGHT_RECENCY = 0.2
    WEIGHT_POPULARITY = 0.1

    # Cache configuration
    CACHE_PREFIX = "trending"
    CACHE_TTL = 86400  # 24 hours (daily refresh)

    # Time windows for calculations
    CURRENT_PERIOD_HOURS = 168  # 7 days
    PREVIOUS_PERIOD_HOURS = 336  # 14 days (for previous 7-day window comparison)
    RECENCY_BASELINE_DAYS = 365

    def __init__(self):
        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")
    def get_trending_content(
        self, content_type: str = "all", limit: int = 20, force_refresh: bool = False
    ) -> List[Dict[str, Any]]:
        """
        Get trending content with caching.

        Args:
            content_type: 'parks', 'rides', or 'all'
            limit: Maximum number of results
            force_refresh: Skip cache and recalculate

        Returns:
            List of trending content with exact frontend format
        """
        cache_key = f"{self.CACHE_PREFIX}:trending:{content_type}:{limit}"

        if not force_refresh:
            cached_result = cache.get(cache_key)
            if cached_result is not None:
                self.logger.debug(
                    f"Returning cached trending results for {content_type}"
                )
                return cached_result

        self.logger.info(f"Calculating trending content for {content_type}")

        try:
            # Calculate trending scores for each content type
            trending_items = []

            if content_type in ["all", "parks"]:
                park_items = self._calculate_trending_parks(
                    limit if content_type == "parks" else limit * 2
                )
                trending_items.extend(park_items)

            if content_type in ["all", "rides"]:
                ride_items = self._calculate_trending_rides(
                    limit if content_type == "rides" else limit * 2
                )
                trending_items.extend(ride_items)

            # Sort by trending score and apply limit
            trending_items.sort(key=lambda x: x.get("trending_score", 0), reverse=True)
            trending_items = trending_items[:limit]

            # Add ranking and format for frontend
            formatted_results = self._format_trending_results(trending_items)

            # Cache results
            cache.set(cache_key, formatted_results, self.CACHE_TTL)

            self.logger.info(
                f"Calculated {len(formatted_results)} trending items for {content_type}"
            )
            return formatted_results

        except Exception as e:
            self.logger.error(f"Error calculating trending content: {e}", exc_info=True)
            return []
    def get_new_content(
        self,
        content_type: str = "all",
        limit: int = 20,
        days_back: int = 30,
        force_refresh: bool = False,
    ) -> List[Dict[str, Any]]:
        """
        Get recently added content.

        Args:
            content_type: 'parks', 'rides', or 'all'
            limit: Maximum number of results
            days_back: How many days to look back
            force_refresh: Skip cache and recalculate

        Returns:
            List of new content with exact frontend format
        """
        cache_key = f"{self.CACHE_PREFIX}:new:{content_type}:{limit}:{days_back}"

        if not force_refresh:
            cached_result = cache.get(cache_key)
            if cached_result is not None:
                self.logger.debug(
                    f"Returning cached new content results for {content_type}"
                )
                return cached_result

        self.logger.info(f"Calculating new content for {content_type}")

        try:
            cutoff_date = timezone.now() - timedelta(days=days_back)
            new_items = []

            if content_type in ["all", "parks"]:
                parks = self._get_new_parks(
                    cutoff_date, limit if content_type == "parks" else limit * 2
                )
                new_items.extend(parks)

            if content_type in ["all", "rides"]:
                rides = self._get_new_rides(
                    cutoff_date, limit if content_type == "rides" else limit * 2
                )
                new_items.extend(rides)

            # Sort by date added (most recent first) and apply limit
            new_items.sort(key=lambda x: x.get("date_added", ""), reverse=True)
            new_items = new_items[:limit]

            # Format for frontend
            formatted_results = self._format_new_content_results(new_items)

            # Cache results
            cache.set(cache_key, formatted_results, self.CACHE_TTL)

            self.logger.info(
                f"Found {len(formatted_results)} new items for {content_type}"
            )
            return formatted_results

        except Exception as e:
            self.logger.error(f"Error getting new content: {e}", exc_info=True)
            return []
    def _calculate_trending_parks(self, limit: int) -> List[Dict[str, Any]]:
        """Calculate trending scores for parks."""
        parks = Park.objects.filter(status="OPERATING").select_related(
            "location", "operator"
        )

        trending_parks = []

        for park in parks:
            try:
                score = self._calculate_content_score(park, "park")
                if score > 0:  # Only include items with positive trending scores
                    trending_parks.append(
                        {
                            "content_object": park,
                            "content_type": "park",
                            "trending_score": score,
                            "id": park.id,
                            "name": park.name,
                            "slug": park.slug,
                            "location": (
                                park.formatted_location
                                if hasattr(park, "location")
                                else ""
                            ),
                            "category": "park",
                            "rating": (
                                float(park.average_rating)
                                if park.average_rating
                                else 0.0
                            ),
                        }
                    )
            except Exception as e:
                self.logger.warning(f"Error calculating score for park {park.id}: {e}")

        return trending_parks

    def _calculate_trending_rides(self, limit: int) -> List[Dict[str, Any]]:
        """Calculate trending scores for rides."""
        rides = Ride.objects.filter(status="OPERATING").select_related(
            "park", "park__location"
        )

        trending_rides = []

        for ride in rides:
            try:
                score = self._calculate_content_score(ride, "ride")
                if score > 0:  # Only include items with positive trending scores
                    # Get location from park (rides don't have direct location field)
                    location = ""
                    if (
                        ride.park
                        and hasattr(ride.park, "location")
                        and ride.park.location
                    ):
                        location = ride.park.formatted_location

                    trending_rides.append(
                        {
                            "content_object": ride,
                            "content_type": "ride",
                            "trending_score": score,
                            "id": ride.pk,  # Use pk instead of id
                            "name": ride.name,
                            "slug": ride.slug,
                            "location": location,
                            "category": "ride",
                            "rating": (
                                float(ride.average_rating)
                                if ride.average_rating
                                else 0.0
                            ),
                        }
                    )
            except Exception as e:
                self.logger.warning(f"Error calculating score for ride {ride.pk}: {e}")

        return trending_rides
    def _calculate_content_score(self, content_obj: Any, content_type: str) -> float:
        """
        Calculate weighted trending score for content object.

        Returns:
            Float between 0.0 and 1.0 representing trending strength
        """
        try:
            # Get content type for PageView queries
            ct = ContentType.objects.get_for_model(content_obj)

            # 1. View Growth Score (40% weight)
            view_growth_score = self._calculate_view_growth_score(ct, content_obj.id)

            # 2. Rating Score (30% weight)
            rating_score = self._calculate_rating_score(content_obj)

            # 3. Recency Score (20% weight)
            recency_score = self._calculate_recency_score(content_obj)

            # 4. Popularity Score (10% weight)
            popularity_score = self._calculate_popularity_score(ct, content_obj.id)

            # Calculate weighted final score
            final_score = (
                view_growth_score * self.WEIGHT_VIEW_GROWTH
                + rating_score * self.WEIGHT_RATING
                + recency_score * self.WEIGHT_RECENCY
                + popularity_score * self.WEIGHT_POPULARITY
            )

            self.logger.debug(
                f"{content_type} {content_obj.id}: "
                f"growth={view_growth_score:.3f}, rating={rating_score:.3f}, "
                f"recency={recency_score:.3f}, popularity={popularity_score:.3f}, "
                f"final={final_score:.3f}"
            )

            return final_score

        except Exception as e:
            self.logger.error(
                f"Error calculating score for {content_type} {content_obj.id}: {e}"
            )
            return 0.0

    def _calculate_view_growth_score(
        self, content_type: ContentType, object_id: int
    ) -> float:
        """Calculate normalized view growth score."""
        try:
            current_views, previous_views, growth_percentage = (
                PageView.get_views_growth(
                    content_type,
                    object_id,
                    self.CURRENT_PERIOD_HOURS,
                    self.PREVIOUS_PERIOD_HOURS,
                )
            )

            if previous_views == 0:
                # New content with views gets boost
                return min(current_views / 100.0, 1.0) if current_views > 0 else 0.0

            # Normalize growth percentage to 0-1 scale
            # 100% growth = 0.2, 500% growth and above = 1.0
            normalized_growth = (
                min(growth_percentage / 500.0, 1.0) if growth_percentage > 0 else 0.0
            )
            return max(normalized_growth, 0.0)

        except Exception as e:
            self.logger.warning(f"Error calculating view growth: {e}")
            return 0.0
    def _calculate_rating_score(self, content_obj: Any) -> float:
        """Calculate normalized rating score."""
        try:
            rating = getattr(content_obj, "average_rating", None)
            if rating is None or rating == 0:
                return 0.3  # Neutral score for unrated content

            # Normalize rating from 1-10 scale to 0-1 scale
            # Rating of 5 = 0.44, rating of 8 = 0.78, rating of 10 = 1.0
            return min(max((float(rating) - 1) / 9.0, 0.0), 1.0)

        except Exception as e:
            self.logger.warning(f"Error calculating rating score: {e}")
            return 0.3

    def _calculate_recency_score(self, content_obj: Any) -> float:
        """Calculate recency score based on when content was added/updated."""
        try:
            # Use opening_date for parks/rides, or created_at as fallback
            date_added = getattr(content_obj, "opening_date", None)
            if not date_added:
                date_added = getattr(content_obj, "created_at", None)
            if not date_added:
                return 0.5  # Neutral score for unknown dates

            # Handle both date and datetime objects
            if hasattr(date_added, "date"):
                date_added = date_added.date()

            # Calculate days since added
            today = timezone.now().date()
            days_since_added = (today - date_added).days

            # Recency score: newer content gets higher scores
            # 0 days = 1.0, 30 days = 0.8, 365 days = 0.1, >365 days = 0.0
            if days_since_added <= 0:
                return 1.0
            elif days_since_added <= 30:
                return 1.0 - (days_since_added / 30.0) * 0.2  # 1.0 to 0.8
            elif days_since_added <= self.RECENCY_BASELINE_DAYS:
                return (
                    0.8
                    - ((days_since_added - 30) / (self.RECENCY_BASELINE_DAYS - 30))
                    * 0.7
                )  # 0.8 to 0.1
            else:
                return 0.0

        except Exception as e:
            self.logger.warning(f"Error calculating recency score: {e}")
            return 0.5

    def _calculate_popularity_score(
        self, content_type: ContentType, object_id: int
    ) -> float:
        """Calculate popularity score based on total view count."""
        try:
            total_views = PageView.get_total_views_count(
                content_type, object_id, hours=168  # Last 7 days
            )

            # Normalize views to 0-1 scale
            # 0 views = 0.0, 100 views = 0.5, 1000+ views = 1.0
            if total_views == 0:
                return 0.0
            elif total_views <= 100:
                return total_views / 200.0  # 0.0 to 0.5
            else:
                return min(0.5 + (total_views - 100) / 1800.0, 1.0)  # 0.5 to 1.0

        except Exception as e:
            self.logger.warning(f"Error calculating popularity score: {e}")
            return 0.0
    def _get_new_parks(self, cutoff_date: datetime, limit: int) -> List[Dict[str, Any]]:
        """Get recently added parks."""
        new_parks = (
            Park.objects.filter(
                Q(created_at__gte=cutoff_date)
                | Q(opening_date__gte=cutoff_date.date()),
                status="OPERATING",
            )
            .select_related("location", "operator")
            .order_by("-created_at", "-opening_date")[:limit]
        )

        results = []
        for park in new_parks:
            date_added = park.opening_date or park.created_at
            # Convert datetime to date; plain dates pass through unchanged
            if date_added and isinstance(date_added, datetime):
                date_added = date_added.date()

            results.append(
                {
                    "content_object": park,
                    "content_type": "park",
                    "id": park.pk,  # Use pk instead of id for Django compatibility
                    "name": park.name,
                    "slug": park.slug,
                    "location": (
                        park.formatted_location if hasattr(park, "location") else ""
                    ),
                    "category": "park",
                    "date_added": date_added.isoformat() if date_added else "",
                }
            )

        return results

    def _get_new_rides(self, cutoff_date: datetime, limit: int) -> List[Dict[str, Any]]:
        """Get recently added rides."""
        new_rides = (
            Ride.objects.filter(
                Q(created_at__gte=cutoff_date)
                | Q(opening_date__gte=cutoff_date.date()),
                status="OPERATING",
            )
            .select_related("park", "park__location")
            .order_by("-created_at", "-opening_date")[:limit]
        )

        results = []
        for ride in new_rides:
            date_added = getattr(ride, "opening_date", None) or getattr(
                ride, "created_at", None
            )
            # Convert datetime to date; plain dates pass through unchanged
            if date_added and isinstance(date_added, datetime):
                date_added = date_added.date()

            # Get location from park (rides don't have direct location field)
            location = ""
            if ride.park and hasattr(ride.park, "location") and ride.park.location:
                location = ride.park.formatted_location

            results.append(
                {
                    "content_object": ride,
                    "content_type": "ride",
                    "id": ride.pk,  # Use pk instead of id for Django compatibility
                    "name": ride.name,
                    "slug": ride.slug,
                    "location": location,
                    "category": "ride",
                    "date_added": date_added.isoformat() if date_added else "",
                }
            )

        return results
    def _format_trending_results(
        self, trending_items: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Format trending results for frontend consumption."""
        formatted_results = []

        for rank, item in enumerate(trending_items, 1):
            try:
                # Get view change for display
                content_obj = item["content_object"]
                ct = ContentType.objects.get_for_model(content_obj)
                current_views, previous_views, growth_percentage = (
                    PageView.get_views_growth(
                        ct,
                        content_obj.id,
                        self.CURRENT_PERIOD_HOURS,
                        self.PREVIOUS_PERIOD_HOURS,
                    )
                )

                # Format exactly as frontend expects
                formatted_item = {
                    "id": item["id"],
                    "name": item["name"],
                    "location": item["location"],
                    "category": item["category"],
                    "rating": item["rating"],
                    "rank": rank,
                    "views": current_views,
                    "views_change": (
                        f"+{growth_percentage:.1f}%"
                        if growth_percentage > 0
                        else f"{growth_percentage:.1f}%"
                    ),
                    "slug": item["slug"],
                }

                formatted_results.append(formatted_item)

            except Exception as e:
                self.logger.warning(f"Error formatting trending item: {e}")

        return formatted_results

    def _format_new_content_results(
        self, new_items: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Format new content results for frontend consumption."""
        formatted_results = []

        for item in new_items:
            try:
                # Format exactly as frontend expects
                formatted_item = {
                    "id": item["id"],
                    "name": item["name"],
                    "location": item["location"],
                    "category": item["category"],
                    "date_added": item["date_added"],
                    "slug": item["slug"],
                }

                formatted_results.append(formatted_item)

            except Exception as e:
                self.logger.warning(f"Error formatting new content item: {e}")

        return formatted_results
def clear_cache(self, content_type: str = "all") -> None:
|
||||
"""Clear trending and new content caches."""
|
||||
try:
|
||||
cache_patterns = [
|
||||
f"{self.CACHE_PREFIX}:trending:{content_type}:*",
|
||||
f"{self.CACHE_PREFIX}:new:{content_type}:*",
|
||||
]
|
||||
|
||||
if content_type == "all":
|
||||
cache_patterns.extend(
|
||||
[
|
||||
f"{self.CACHE_PREFIX}:trending:parks:*",
|
||||
f"{self.CACHE_PREFIX}:trending:rides:*",
|
||||
f"{self.CACHE_PREFIX}:new:parks:*",
|
||||
f"{self.CACHE_PREFIX}:new:rides:*",
|
||||
]
|
||||
)
|
||||
|
||||
# Note: This is a simplified cache clear
|
||||
# In production, you might want to use cache.delete_many() or similar
|
||||
cache.clear()
|
||||
self.logger.info(f"Cleared trending caches for {content_type}")
|
||||
|
||||
except Exception as e:
|
||||
self.logger.error(f"Error clearing cache: {e}")
|
||||
|
||||
|
||||
# Singleton service instance
|
||||
trending_service = TrendingService()
|
||||
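Note that `clear_cache` builds per-pattern keys but then falls back to `cache.clear()`, which wipes the entire cache, not just the trending entries. A pattern-scoped deletion (as the inline comment hints) keeps unrelated keys intact. This is a minimal sketch of the idea over a plain dict stand-in for the cache backend, so it runs anywhere; with a real Redis-backed Django cache one would reach for a backend-specific pattern delete instead, and `delete_matching` here is a hypothetical helper, not part of the service.

```python
from fnmatch import fnmatch

def delete_matching(store: dict, patterns: list) -> int:
    """Delete every key matching any glob-style pattern; return count removed."""
    doomed = [key for key in store if any(fnmatch(key, p) for p in patterns)]
    for key in doomed:
        del store[key]
    return len(doomed)


# Dict stand-in for the cache: only trending/new keys should be cleared.
store = {
    "trending:trending:parks:24h": [1, 2],
    "trending:trending:rides:24h": [3],
    "trending:new:parks:7d": [4],
    "unrelated:key": "keep me",
}
removed = delete_matching(store, ["trending:trending:*", "trending:new:*"])
# removed == 3; "unrelated:key" survives, unlike with cache.clear()
```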
24	backend/apps/core/urls.py	Normal file
@@ -0,0 +1,24 @@
"""
Core app URL configuration.
"""

from django.urls import path, include
from .views.entity_search import (
    EntityFuzzySearchView,
    EntityNotFoundView,
    QuickEntitySuggestionView,
)

app_name = 'core'

# Entity search endpoints
entity_patterns = [
    path('search/', EntityFuzzySearchView.as_view(), name='entity_fuzzy_search'),
    path('not-found/', EntityNotFoundView.as_view(), name='entity_not_found'),
    path('suggestions/', QuickEntitySuggestionView.as_view(), name='entity_suggestions'),
]

urlpatterns = [
    # Entity fuzzy matching and search endpoints
    path('entities/', include(entity_patterns)),
]
1	backend/apps/core/urls/__init__.py	Normal file
@@ -0,0 +1 @@
# URLs package for core app

347	backend/apps/core/views/entity_search.py	Normal file
@@ -0,0 +1,347 @@
"""
Entity search views with fuzzy matching and authentication prompts.
"""

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
from rest_framework.permissions import AllowAny
from django.views.decorators.csrf import csrf_exempt
from django.utils.decorators import method_decorator
from typing import Optional, List

from ..services.entity_fuzzy_matching import (
    entity_fuzzy_matcher,
    EntityType,
)


class EntityFuzzySearchView(APIView):
    """
    API endpoint for fuzzy entity search with authentication prompts.

    Handles entity lookup failures by providing intelligent suggestions and
    authentication prompts for entity creation.
    """

    permission_classes = [AllowAny]  # Allow both authenticated and anonymous users

    def post(self, request):
        """
        Perform fuzzy entity search.

        Request body:
        {
            "query": "entity name to search",
            "entity_types": ["park", "ride", "company"],  // optional
            "include_suggestions": true  // optional, default true
        }

        Response:
        {
            "success": true,
            "query": "original query",
            "matches": [
                {
                    "entity_type": "park",
                    "name": "Cedar Point",
                    "slug": "cedar-point",
                    "score": 0.95,
                    "confidence": "high",
                    "match_reason": "Text similarity with 'Cedar Point'",
                    "url": "/parks/cedar-point/",
                    "entity_id": 123
                }
            ],
            "suggestion": {
                "suggested_name": "New Entity Name",
                "entity_type": "park",
                "requires_authentication": true,
                "login_prompt": "Log in to suggest adding...",
                "signup_prompt": "Sign up to contribute...",
                "creation_hint": "Help expand ThrillWiki..."
            },
            "user_authenticated": false
        }
        """
        try:
            # Parse request data
            query = request.data.get("query", "").strip()
            entity_types_raw = request.data.get(
                "entity_types", ["park", "ride", "company"]
            )
            include_suggestions = request.data.get("include_suggestions", True)

            # Validate query
            if not query or len(query) < 2:
                return Response(
                    {
                        "success": False,
                        "error": "Query must be at least 2 characters long",
                        "code": "INVALID_QUERY",
                    },
                    status=status.HTTP_400_BAD_REQUEST,
                )

            # Parse and validate entity types
            entity_types = []
            valid_types = {"park", "ride", "company"}

            for entity_type in entity_types_raw:
                if entity_type in valid_types:
                    entity_types.append(EntityType(entity_type))

            if not entity_types:
                entity_types = [EntityType.PARK, EntityType.RIDE, EntityType.COMPANY]

            # Perform fuzzy matching
            matches, suggestion = entity_fuzzy_matcher.find_entity(
                query=query, entity_types=entity_types, user=request.user
            )

            # Format response
            response_data = {
                "success": True,
                "query": query,
                "matches": [match.to_dict() for match in matches],
                "user_authenticated": (
                    request.user.is_authenticated
                    if hasattr(request.user, "is_authenticated")
                    else False
                ),
            }

            # Include suggestion if requested and available
            if include_suggestions and suggestion:
                response_data["suggestion"] = {
                    "suggested_name": suggestion.suggested_name,
                    "entity_type": suggestion.entity_type.value,
                    "requires_authentication": suggestion.requires_authentication,
                    "login_prompt": suggestion.login_prompt,
                    "signup_prompt": suggestion.signup_prompt,
                    "creation_hint": suggestion.creation_hint,
                }

            return Response(response_data, status=status.HTTP_200_OK)

        except Exception as e:
            return Response(
                {
                    "success": False,
                    "error": f"Internal server error: {str(e)}",
                    "code": "INTERNAL_ERROR",
                },
                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
            )


class EntityNotFoundView(APIView):
    """
    Endpoint specifically for handling entity not found scenarios.

    This view is called when normal entity lookup fails and provides
    fuzzy matching suggestions along with authentication prompts.
    """

    permission_classes = [AllowAny]

    def post(self, request):
        """
        Handle entity not found with suggestions.

        Request body:
        {
            "original_query": "what user searched for",
            "attempted_slug": "slug-that-failed",  // optional
            "entity_type": "park",  // optional, inferred from context
            "context": {  // optional context information
                "park_slug": "park-slug-if-searching-for-ride",
                "source_page": "page where search originated"
            }
        }
        """
        try:
            original_query = request.data.get("original_query", "").strip()
            attempted_slug = request.data.get("attempted_slug", "")
            entity_type_hint = request.data.get("entity_type")
            context = request.data.get("context", {})

            if not original_query:
                return Response(
                    {
                        "success": False,
                        "error": "original_query is required",
                        "code": "MISSING_QUERY",
                    },
                    status=status.HTTP_400_BAD_REQUEST,
                )

            # Determine entity types to search based on context
            entity_types = []
            if entity_type_hint:
                try:
                    entity_types = [EntityType(entity_type_hint)]
                except ValueError:
                    pass

            # If we have park context, prioritize ride searches
            if context.get("park_slug") and not entity_types:
                entity_types = [EntityType.RIDE, EntityType.PARK]

            # Default to all types if not specified
            if not entity_types:
                entity_types = [EntityType.PARK, EntityType.RIDE, EntityType.COMPANY]

            # Try fuzzy matching on the original query
            matches, suggestion = entity_fuzzy_matcher.find_entity(
                query=original_query, entity_types=entity_types, user=request.user
            )

            # If no matches on original query, try the attempted slug
            if not matches and attempted_slug:
                # Convert slug back to readable name for fuzzy matching
                slug_as_name = attempted_slug.replace("-", " ").title()
                matches, suggestion = entity_fuzzy_matcher.find_entity(
                    query=slug_as_name, entity_types=entity_types, user=request.user
                )

            # Prepare response with detailed context
            response_data = {
                "success": True,
                "original_query": original_query,
                "attempted_slug": attempted_slug,
                "context": context,
                "matches": [match.to_dict() for match in matches],
                "user_authenticated": (
                    request.user.is_authenticated
                    if hasattr(request.user, "is_authenticated")
                    else False
                ),
                "has_matches": len(matches) > 0,
            }

            # Always include suggestion for entity not found scenarios
            if suggestion:
                response_data["suggestion"] = {
                    "suggested_name": suggestion.suggested_name,
                    "entity_type": suggestion.entity_type.value,
                    "requires_authentication": suggestion.requires_authentication,
                    "login_prompt": suggestion.login_prompt,
                    "signup_prompt": suggestion.signup_prompt,
                    "creation_hint": suggestion.creation_hint,
                }

            return Response(response_data, status=status.HTTP_200_OK)

        except Exception as e:
            return Response(
                {
                    "success": False,
                    "error": f"Internal server error: {str(e)}",
                    "code": "INTERNAL_ERROR",
                },
                status=status.HTTP_500_INTERNAL_SERVER_ERROR,
            )


@method_decorator(csrf_exempt, name="dispatch")
class QuickEntitySuggestionView(APIView):
    """
    Lightweight endpoint for quick entity suggestions (e.g., autocomplete).
    """

    permission_classes = [AllowAny]

    def get(self, request):
        """
        Get quick entity suggestions.

        Query parameters:
        - q: query string
        - types: comma-separated entity types (park,ride,company)
        - limit: max results (default 5)
        """
        try:
            query = request.GET.get("q", "").strip()
            types_param = request.GET.get("types", "park,ride,company")
            limit = min(int(request.GET.get("limit", 5)), 10)  # Cap at 10

            if not query or len(query) < 2:
                return Response(
                    {"suggestions": [], "query": query}, status=status.HTTP_200_OK
                )

            # Parse entity types
            entity_types = []
            for type_str in types_param.split(","):
                type_str = type_str.strip()
                if type_str in ["park", "ride", "company"]:
                    entity_types.append(EntityType(type_str))

            if not entity_types:
                entity_types = [EntityType.PARK, EntityType.RIDE, EntityType.COMPANY]

            # Get fuzzy matches
            matches, _ = entity_fuzzy_matcher.find_entity(
                query=query, entity_types=entity_types, user=request.user
            )

            # Format as simple suggestions
            suggestions = []
            for match in matches[:limit]:
                suggestions.append(
                    {
                        "name": match.name,
                        "type": match.entity_type.value,
                        "slug": match.slug,
                        "url": match.url,
                        "score": match.score,
                        "confidence": match.confidence,
                    }
                )

            return Response(
                {"suggestions": suggestions, "query": query, "count": len(suggestions)},
                status=status.HTTP_200_OK,
            )

        except Exception as e:
            return Response(
                {"suggestions": [], "query": request.GET.get("q", ""), "error": str(e)},
                status=status.HTTP_200_OK,
            )  # Return 200 even on errors for autocomplete
# Utility function for other views to use
def get_entity_suggestions(
    query: str, entity_types: Optional[List[str]] = None, user=None
):
    """
    Utility function for other Django views to get entity suggestions.

    Args:
        query: Search query
        entity_types: List of entity type strings
        user: Django user object

    Returns:
        Tuple of (matches, suggestion)
    """
    try:
        # Convert string types to EntityType enums
        parsed_types = []
        if entity_types:
            for entity_type in entity_types:
                try:
                    parsed_types.append(EntityType(entity_type))
                except ValueError:
                    continue

        if not parsed_types:
            parsed_types = [EntityType.PARK, EntityType.RIDE, EntityType.COMPANY]

        return entity_fuzzy_matcher.find_entity(
            query=query, entity_types=parsed_types, user=user
        )
    except Exception:
        return [], None
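The not-found fallback in `EntityNotFoundView` retries the fuzzy match after turning the failed slug back into a human-readable query with `replace("-", " ").title()`. A quick standalone sketch of that transformation (the helper name `slug_to_query` is illustrative, not from the source):

```python
def slug_to_query(slug: str) -> str:
    # Mirror the view's fallback: hyphens to spaces, then Title Case.
    return slug.replace("-", " ").title()

queries = [slug_to_query(s) for s in ["cedar-point", "steel-vengeance", "top-thrill-2"]]
# -> ["Cedar Point", "Steel Vengeance", "Top Thrill 2"]
```

Note that `str.title()` also capitalizes after digits and apostrophes (e.g. "kingda-ka's" would become "Kingda Ka'S"), so the recovered query is only an approximation of the original entity name; the fuzzy matcher is expected to absorb that.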
@@ -317,55 +317,55 @@ class PgHistoryEventsAdmin(admin.ModelAdmin):
    """Admin interface for pghistory Events"""

    list_display = (
        "pgh_id",
        "pgh_created_at",
        "pgh_label",
        "pgh_model",
        "pgh_obj_id",
        "pgh_context_display",
    )
    list_filter = (
        "pgh_label",
        "pgh_model",
        "pgh_created_at",
    )
    search_fields = (
        "pgh_obj_id",
        "pgh_context",
    )
    readonly_fields = (
        "pgh_id",
        "pgh_created_at",
        "pgh_label",
        "pgh_model",
        "pgh_obj_id",
        "pgh_context",
        "pgh_data",
    )
    date_hierarchy = "pgh_created_at"
    ordering = ("-pgh_created_at",)

    fieldsets = (
        (
            "Event Information",
            {
                "fields": (
                    "pgh_id",
                    "pgh_created_at",
                    "pgh_label",
                    "pgh_model",
                    "pgh_obj_id",
                )
            },
        ),
        (
            "Context & Data",
            {
                "fields": (
                    "pgh_context",
                    "pgh_data",
                ),
                "classes": ("collapse",),
            },
        ),
    )
@@ -392,7 +392,7 @@ class PgHistoryEventsAdmin(admin.ModelAdmin):

    def has_delete_permission(self, request, obj=None):
        """Prevent deletion of history events"""
        return getattr(request.user, "is_superuser", False)


# Register the models with their admin classes
@@ -5,6 +5,7 @@ from .models.company import Company
from .models.rides import Ride, RideModel, RollerCoasterStats
from .models.location import RideLocation
from .models.reviews import RideReview
from .models.rankings import RideRanking, RidePairComparison, RankingSnapshot


class ManufacturerAdmin(admin.ModelAdmin):
@@ -484,4 +485,222 @@ class CompanyAdmin(admin.ModelAdmin):
        return ", ".join(obj.roles) if obj.roles else "No roles"
@admin.register(RideRanking)
class RideRankingAdmin(admin.ModelAdmin):
    """Admin interface for ride rankings"""

    list_display = (
        "rank",
        "ride_name",
        "park_name",
        "winning_percentage_display",
        "wins",
        "losses",
        "ties",
        "average_rating",
        "mutual_riders_count",
        "last_calculated",
    )
    list_filter = (
        "ride__category",
        "last_calculated",
        "calculation_version",
    )
    search_fields = (
        "ride__name",
        "ride__park__name",
    )
    readonly_fields = (
        "ride",
        "rank",
        "wins",
        "losses",
        "ties",
        "winning_percentage",
        "mutual_riders_count",
        "comparison_count",
        "average_rating",
        "last_calculated",
        "calculation_version",
        "total_comparisons",
    )
    ordering = ["rank"]

    fieldsets = (
        (
            "Ride Information",
            {"fields": ("ride",)},
        ),
        (
            "Ranking Metrics",
            {
                "fields": (
                    "rank",
                    "winning_percentage",
                    "wins",
                    "losses",
                    "ties",
                    "total_comparisons",
                )
            },
        ),
        (
            "Additional Metrics",
            {
                "fields": (
                    "average_rating",
                    "mutual_riders_count",
                    "comparison_count",
                )
            },
        ),
        (
            "Calculation Info",
            {
                "fields": (
                    "last_calculated",
                    "calculation_version",
                ),
                "classes": ("collapse",),
            },
        ),
    )

    @admin.display(description="Ride")
    def ride_name(self, obj):
        return obj.ride.name

    @admin.display(description="Park")
    def park_name(self, obj):
        return obj.ride.park.name

    @admin.display(description="Win %")
    def winning_percentage_display(self, obj):
        return f"{obj.winning_percentage:.1%}"

    def has_add_permission(self, request):
        # Rankings are calculated automatically
        return False

    def has_change_permission(self, request, obj=None):
        # Rankings are read-only
        return False


@admin.register(RidePairComparison)
class RidePairComparisonAdmin(admin.ModelAdmin):
    """Admin interface for ride pair comparisons"""

    list_display = (
        "comparison_summary",
        "ride_a_name",
        "ride_b_name",
        "winner_display",
        "ride_a_wins",
        "ride_b_wins",
        "ties",
        "mutual_riders_count",
        "last_calculated",
    )
    list_filter = ("last_calculated",)
    search_fields = (
        "ride_a__name",
        "ride_b__name",
        "ride_a__park__name",
        "ride_b__park__name",
    )
    readonly_fields = (
        "ride_a",
        "ride_b",
        "ride_a_wins",
        "ride_b_wins",
        "ties",
        "mutual_riders_count",
        "ride_a_avg_rating",
        "ride_b_avg_rating",
        "last_calculated",
        "winner",
        "is_tie",
    )
    ordering = ["-mutual_riders_count"]

    @admin.display(description="Comparison")
    def comparison_summary(self, obj):
        return f"{obj.ride_a.name} vs {obj.ride_b.name}"

    @admin.display(description="Ride A")
    def ride_a_name(self, obj):
        return obj.ride_a.name

    @admin.display(description="Ride B")
    def ride_b_name(self, obj):
        return obj.ride_b.name

    @admin.display(description="Winner")
    def winner_display(self, obj):
        if obj.is_tie:
            return "TIE"
        winner = obj.winner
        if winner:
            return winner.name
        return "N/A"

    def has_add_permission(self, request):
        # Comparisons are calculated automatically
        return False

    def has_change_permission(self, request, obj=None):
        # Comparisons are read-only
        return False


@admin.register(RankingSnapshot)
class RankingSnapshotAdmin(admin.ModelAdmin):
    """Admin interface for ranking history snapshots"""

    list_display = (
        "ride_name",
        "park_name",
        "rank",
        "winning_percentage_display",
        "snapshot_date",
    )
    list_filter = (
        "snapshot_date",
        "ride__category",
    )
    search_fields = (
        "ride__name",
        "ride__park__name",
    )
    readonly_fields = (
        "ride",
        "rank",
        "winning_percentage",
        "snapshot_date",
    )
    date_hierarchy = "snapshot_date"
    ordering = ["-snapshot_date", "rank"]

    @admin.display(description="Ride")
    def ride_name(self, obj):
        return obj.ride.name

    @admin.display(description="Park")
    def park_name(self, obj):
        return obj.ride.park.name

    @admin.display(description="Win %")
    def winning_percentage_display(self, obj):
        return f"{obj.winning_percentage:.1%}"

    def has_add_permission(self, request):
        # Snapshots are created automatically
        return False

    def has_change_permission(self, request, obj=None):
        # Snapshots are read-only
        return False


admin.site.register(RideLocation, RideLocationAdmin)
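The three admin classes render `winning_percentage` with the `:.1%` format spec. The migration's help_text states the stored value is a win percentage where ties count as 0.5; a minimal sketch of that convention and its display, using `Decimal` as the model field does (the `winning_percentage` helper here is an illustration of the documented convention, not the service's actual calculation, which is not shown in this diff):

```python
from decimal import Decimal

def winning_percentage(wins: int, losses: int, ties: int) -> Decimal:
    # Ties count as half a win, per the field's help_text.
    total = wins + losses + ties
    return (Decimal(wins) + Decimal(ties) / 2) / Decimal(total)

pct = winning_percentage(wins=12, losses=5, ties=3)  # (12 + 1.5) / 20 = 0.675
display = f"{pct:.1%}"  # -> "67.5%", as winning_percentage_display renders it
```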
0	backend/apps/rides/management/__init__.py	Normal file
0	backend/apps/rides/management/commands/__init__.py	Normal file
@@ -0,0 +1,36 @@
from django.core.management.base import BaseCommand
from django.utils import timezone

from apps.rides.services import RideRankingService


class Command(BaseCommand):
    help = "Calculates and updates ride rankings using the Internet Roller Coaster Poll algorithm"

    def add_arguments(self, parser):
        parser.add_argument(
            "--category",
            type=str,
            default=None,
            help="Optional ride category to filter (e.g., RC for roller coasters)",
        )

    def handle(self, *args, **options):
        category = options.get("category")

        service = RideRankingService()
        self.stdout.write(
            self.style.SUCCESS(
                f"Starting ride ranking calculation at {timezone.now().isoformat()}"
            )
        )

        result = service.update_all_rankings(category=category)

        self.stdout.write(
            self.style.SUCCESS(
                f"Completed ranking calculation: {result.get('rides_ranked', 0)} rides ranked, "
                f"{result.get('comparisons_made', 0)} comparisons, "
                f"duration={result.get('duration', 0):.2f}s"
            )
        )
603	backend/apps/rides/migrations/0006_add_ride_rankings.py	Normal file
@@ -0,0 +1,603 @@
# Generated by Django 5.2.5 on 2025-08-25 00:50

import django.core.validators
import django.db.models.deletion
import django.utils.timezone
import pgtrigger.compiler
import pgtrigger.migrations
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ("pghistory", "0007_auto_20250421_0444"),
        ("rides", "0005_ridelocationevent_ridelocation_insert_insert_and_more"),
    ]

    operations = [
        migrations.CreateModel(
            name="RidePairComparison",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                (
                    "ride_a_wins",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of mutual riders who rated ride_a higher",
                    ),
                ),
                (
                    "ride_b_wins",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of mutual riders who rated ride_b higher",
                    ),
                ),
                (
                    "ties",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of mutual riders who rated both rides equally",
                    ),
                ),
                (
                    "mutual_riders_count",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Total number of users who have rated both rides",
                    ),
                ),
                (
                    "ride_a_avg_rating",
                    models.DecimalField(
                        blank=True,
                        decimal_places=2,
                        help_text="Average rating of ride_a from mutual riders",
                        max_digits=3,
                        null=True,
                    ),
                ),
                (
                    "ride_b_avg_rating",
                    models.DecimalField(
                        blank=True,
                        decimal_places=2,
                        help_text="Average rating of ride_b from mutual riders",
                        max_digits=3,
                        null=True,
                    ),
                ),
                (
                    "last_calculated",
                    models.DateTimeField(
                        auto_now=True,
                        help_text="When this comparison was last calculated",
                    ),
                ),
                (
                    "ride_a",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="comparisons_as_a",
                        to="rides.ride",
                    ),
                ),
                (
                    "ride_b",
                    models.ForeignKey(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="comparisons_as_b",
                        to="rides.ride",
                    ),
                ),
            ],
        ),
        migrations.CreateModel(
            name="RidePairComparisonEvent",
            fields=[
                ("pgh_id", models.AutoField(primary_key=True, serialize=False)),
                ("pgh_created_at", models.DateTimeField(auto_now_add=True)),
                ("pgh_label", models.TextField(help_text="The event label.")),
                ("id", models.BigIntegerField()),
                (
                    "ride_a_wins",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of mutual riders who rated ride_a higher",
                    ),
                ),
                (
                    "ride_b_wins",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of mutual riders who rated ride_b higher",
                    ),
                ),
                (
                    "ties",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of mutual riders who rated both rides equally",
                    ),
                ),
                (
                    "mutual_riders_count",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Total number of users who have rated both rides",
                    ),
                ),
                (
                    "ride_a_avg_rating",
                    models.DecimalField(
                        blank=True,
                        decimal_places=2,
                        help_text="Average rating of ride_a from mutual riders",
                        max_digits=3,
                        null=True,
                    ),
                ),
                (
                    "ride_b_avg_rating",
                    models.DecimalField(
                        blank=True,
                        decimal_places=2,
                        help_text="Average rating of ride_b from mutual riders",
                        max_digits=3,
                        null=True,
                    ),
                ),
                (
                    "last_calculated",
                    models.DateTimeField(
                        auto_now=True,
                        help_text="When this comparison was last calculated",
                    ),
                ),
                (
                    "pgh_context",
                    models.ForeignKey(
                        db_constraint=False,
                        null=True,
                        on_delete=django.db.models.deletion.DO_NOTHING,
                        related_name="+",
                        to="pghistory.context",
                    ),
                ),
                (
                    "pgh_obj",
                    models.ForeignKey(
                        db_constraint=False,
                        on_delete=django.db.models.deletion.DO_NOTHING,
                        related_name="events",
                        to="rides.ridepaircomparison",
                    ),
                ),
                (
                    "ride_a",
                    models.ForeignKey(
                        db_constraint=False,
                        on_delete=django.db.models.deletion.DO_NOTHING,
                        related_name="+",
                        related_query_name="+",
                        to="rides.ride",
                    ),
                ),
                (
                    "ride_b",
                    models.ForeignKey(
                        db_constraint=False,
                        on_delete=django.db.models.deletion.DO_NOTHING,
                        related_name="+",
                        related_query_name="+",
                        to="rides.ride",
                    ),
                ),
            ],
            options={
                "abstract": False,
            },
        ),
        migrations.CreateModel(
            name="RideRanking",
            fields=[
                (
                    "id",
                    models.BigAutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                (
                    "rank",
                    models.PositiveIntegerField(
                        db_index=True, help_text="Overall rank position (1 = best)"
                    ),
                ),
                (
                    "wins",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of rides this ride beats in pairwise comparisons",
                    ),
                ),
                (
                    "losses",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of rides that beat this ride in pairwise comparisons",
                    ),
                ),
                (
                    "ties",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of rides with equal preference in pairwise comparisons",
                    ),
                ),
                (
                    "winning_percentage",
                    models.DecimalField(
                        db_index=True,
                        decimal_places=4,
                        help_text="Win percentage where ties count as 0.5",
                        max_digits=5,
                        validators=[
                            django.core.validators.MinValueValidator(0),
                            django.core.validators.MaxValueValidator(1),
                        ],
                    ),
                ),
                (
                    "mutual_riders_count",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Total number of users who have rated this ride",
                    ),
                ),
                (
                    "comparison_count",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of other rides this was compared against",
                    ),
                ),
                (
                    "average_rating",
                    models.DecimalField(
                        blank=True,
                        decimal_places=2,
                        help_text="Average rating from all users who have rated this ride",
                        max_digits=3,
                        null=True,
                        validators=[
                            django.core.validators.MinValueValidator(1),
                            django.core.validators.MaxValueValidator(10),
                        ],
                    ),
                ),
                (
                    "last_calculated",
                    models.DateTimeField(
                        default=django.utils.timezone.now,
                        help_text="When this ranking was last calculated",
                    ),
                ),
                (
                    "calculation_version",
                    models.CharField(
                        default="1.0",
                        help_text="Algorithm version used for calculation",
                        max_length=10,
                    ),
                ),
                (
                    "ride",
                    models.OneToOneField(
                        on_delete=django.db.models.deletion.CASCADE,
                        related_name="ranking",
                        to="rides.ride",
                    ),
                ),
            ],
            options={
                "ordering": ["rank"],
            },
        ),
        migrations.CreateModel(
            name="RideRankingEvent",
            fields=[
                ("pgh_id", models.AutoField(primary_key=True, serialize=False)),
                ("pgh_created_at", models.DateTimeField(auto_now_add=True)),
                ("pgh_label", models.TextField(help_text="The event label.")),
                ("id", models.BigIntegerField()),
                (
                    "rank",
                    models.PositiveIntegerField(
                        help_text="Overall rank position (1 = best)"
                    ),
                ),
                (
                    "wins",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of rides this ride beats in pairwise comparisons",
                    ),
                ),
                (
                    "losses",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of rides that beat this ride in pairwise comparisons",
                    ),
                ),
                (
                    "ties",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of rides with equal preference in pairwise comparisons",
                    ),
                ),
                (
                    "winning_percentage",
                    models.DecimalField(
                        decimal_places=4,
                        help_text="Win percentage where ties count as 0.5",
                        max_digits=5,
                        validators=[
                            django.core.validators.MinValueValidator(0),
                            django.core.validators.MaxValueValidator(1),
                        ],
                    ),
                ),
                (
                    "mutual_riders_count",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Total number of users who have rated this ride",
                    ),
                ),
                (
                    "comparison_count",
                    models.PositiveIntegerField(
                        default=0,
                        help_text="Number of other rides this was compared against",
                    ),
                ),
                (
                    "average_rating",
                    models.DecimalField(
                        blank=True,
                        decimal_places=2,
                        help_text="Average rating from all users who have rated this ride",
                        max_digits=3,
                        null=True,
                        validators=[
                            django.core.validators.MinValueValidator(1),
                            django.core.validators.MaxValueValidator(10),
                        ],
                    ),
                ),
                (
                    "last_calculated",
                    models.DateTimeField(
                        default=django.utils.timezone.now,
                        help_text="When this ranking was last calculated",
                    ),
                ),
                (
                    "calculation_version",
                    models.CharField(
                        default="1.0",
                        help_text="Algorithm version used for calculation",
                        max_length=10,
                    ),
                ),
                (
                    "pgh_context",
                    models.ForeignKey(
                        db_constraint=False,
                        null=True,
                        on_delete=django.db.models.deletion.DO_NOTHING,
                        related_name="+",
                        to="pghistory.context",
                    ),
                ),
                (
                    "pgh_obj",
                    models.ForeignKey(
                        db_constraint=False,
                        on_delete=django.db.models.deletion.DO_NOTHING,
                        related_name="events",
                        to="rides.rideranking",
                    ),
                ),
                (
                    "ride",
                    models.ForeignKey(
                        db_constraint=False,
                        on_delete=django.db.models.deletion.DO_NOTHING,
                        related_name="+",
                        related_query_name="+",
                        to="rides.ride",
                    ),
                ),
            ],
            options={
                "abstract": False,
            },
        ),
migrations.CreateModel(
|
||||
name="RankingSnapshot",
|
||||
fields=[
|
||||
(
|
||||
"id",
|
||||
models.BigAutoField(
|
||||
auto_created=True,
|
||||
primary_key=True,
|
||||
serialize=False,
|
||||
verbose_name="ID",
|
||||
),
|
||||
),
|
||||
("rank", models.PositiveIntegerField()),
|
||||
(
|
||||
"winning_percentage",
|
||||
models.DecimalField(decimal_places=4, max_digits=5),
|
||||
),
|
||||
(
|
||||
"snapshot_date",
|
||||
models.DateField(
|
||||
db_index=True,
|
||||
help_text="Date when this ranking snapshot was taken",
|
||||
),
|
||||
),
|
||||
(
|
||||
"ride",
|
||||
models.ForeignKey(
|
||||
on_delete=django.db.models.deletion.CASCADE,
|
||||
related_name="ranking_history",
|
||||
to="rides.ride",
|
||||
),
|
||||
),
|
||||
],
|
||||
options={
|
||||
"ordering": ["-snapshot_date", "rank"],
|
||||
"indexes": [
|
||||
models.Index(
|
||||
fields=["snapshot_date", "rank"],
|
||||
name="rides_ranki_snapsho_8e2657_idx",
|
||||
),
|
||||
models.Index(
|
||||
fields=["ride", "-snapshot_date"],
|
||||
name="rides_ranki_ride_id_827bb9_idx",
|
||||
),
|
||||
],
|
||||
"unique_together": {("ride", "snapshot_date")},
|
||||
},
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name="ridepaircomparison",
|
||||
index=models.Index(
|
||||
fields=["ride_a", "ride_b"], name="rides_ridep_ride_a__eb0674_idx"
|
||||
),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name="ridepaircomparison",
|
||||
index=models.Index(
|
||||
fields=["last_calculated"], name="rides_ridep_last_ca_bd9f6c_idx"
|
||||
),
|
||||
),
|
||||
migrations.AlterUniqueTogether(
|
||||
name="ridepaircomparison",
|
||||
unique_together={("ride_a", "ride_b")},
|
||||
),
|
||||
pgtrigger.migrations.AddTrigger(
|
||||
model_name="ridepaircomparison",
|
||||
trigger=pgtrigger.compiler.Trigger(
|
||||
name="insert_insert",
|
||||
sql=pgtrigger.compiler.UpsertTriggerSql(
|
||||
func='INSERT INTO "rides_ridepaircomparisonevent" ("id", "last_calculated", "mutual_riders_count", "pgh_context_id", "pgh_created_at", "pgh_label", "pgh_obj_id", "ride_a_avg_rating", "ride_a_id", "ride_a_wins", "ride_b_avg_rating", "ride_b_id", "ride_b_wins", "ties") VALUES (NEW."id", NEW."last_calculated", NEW."mutual_riders_count", _pgh_attach_context(), NOW(), \'insert\', NEW."id", NEW."ride_a_avg_rating", NEW."ride_a_id", NEW."ride_a_wins", NEW."ride_b_avg_rating", NEW."ride_b_id", NEW."ride_b_wins", NEW."ties"); RETURN NULL;',
|
||||
hash="6a640e10fcfd58c48029ee5b84ea7f0826f50022",
|
||||
operation="INSERT",
|
||||
pgid="pgtrigger_insert_insert_9ad59",
|
||||
table="rides_ridepaircomparison",
|
||||
when="AFTER",
|
||||
),
|
||||
),
|
||||
),
|
||||
pgtrigger.migrations.AddTrigger(
|
||||
model_name="ridepaircomparison",
|
||||
trigger=pgtrigger.compiler.Trigger(
|
||||
name="update_update",
|
||||
sql=pgtrigger.compiler.UpsertTriggerSql(
|
||||
condition="WHEN (OLD.* IS DISTINCT FROM NEW.*)",
|
||||
func='INSERT INTO "rides_ridepaircomparisonevent" ("id", "last_calculated", "mutual_riders_count", "pgh_context_id", "pgh_created_at", "pgh_label", "pgh_obj_id", "ride_a_avg_rating", "ride_a_id", "ride_a_wins", "ride_b_avg_rating", "ride_b_id", "ride_b_wins", "ties") VALUES (NEW."id", NEW."last_calculated", NEW."mutual_riders_count", _pgh_attach_context(), NOW(), \'update\', NEW."id", NEW."ride_a_avg_rating", NEW."ride_a_id", NEW."ride_a_wins", NEW."ride_b_avg_rating", NEW."ride_b_id", NEW."ride_b_wins", NEW."ties"); RETURN NULL;',
|
||||
hash="a77eee0b791bada3f84f008dabd7486c66b03fa6",
|
||||
operation="UPDATE",
|
||||
pgid="pgtrigger_update_update_73b31",
|
||||
table="rides_ridepaircomparison",
|
||||
when="AFTER",
|
||||
),
|
||||
),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name="rideranking",
|
||||
index=models.Index(fields=["rank"], name="rides_rider_rank_ea4706_idx"),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name="rideranking",
|
||||
index=models.Index(
|
||||
fields=["winning_percentage", "-mutual_riders_count"],
|
||||
name="rides_rider_winning_d9b3e8_idx",
|
||||
),
|
||||
),
|
||||
migrations.AddIndex(
|
||||
model_name="rideranking",
|
||||
index=models.Index(
|
||||
fields=["ride", "last_calculated"],
|
||||
name="rides_rider_ride_id_ece73d_idx",
|
||||
),
|
||||
),
|
||||
migrations.AddConstraint(
|
||||
model_name="rideranking",
|
||||
constraint=models.CheckConstraint(
|
||||
condition=models.Q(
|
||||
("winning_percentage__gte", 0), ("winning_percentage__lte", 1)
|
||||
),
|
||||
name="rideranking_winning_percentage_range",
|
||||
violation_error_message="Winning percentage must be between 0 and 1",
|
||||
),
|
||||
),
|
||||
migrations.AddConstraint(
|
||||
model_name="rideranking",
|
||||
constraint=models.CheckConstraint(
|
||||
condition=models.Q(
|
||||
("average_rating__isnull", True),
|
||||
models.Q(("average_rating__gte", 1), ("average_rating__lte", 10)),
|
||||
_connector="OR",
|
||||
),
|
||||
name="rideranking_average_rating_range",
|
||||
violation_error_message="Average rating must be between 1 and 10",
|
||||
),
|
||||
),
|
||||
pgtrigger.migrations.AddTrigger(
|
||||
model_name="rideranking",
|
||||
trigger=pgtrigger.compiler.Trigger(
|
||||
name="insert_insert",
|
||||
sql=pgtrigger.compiler.UpsertTriggerSql(
|
||||
func='INSERT INTO "rides_riderankingevent" ("average_rating", "calculation_version", "comparison_count", "id", "last_calculated", "losses", "mutual_riders_count", "pgh_context_id", "pgh_created_at", "pgh_label", "pgh_obj_id", "rank", "ride_id", "ties", "winning_percentage", "wins") VALUES (NEW."average_rating", NEW."calculation_version", NEW."comparison_count", NEW."id", NEW."last_calculated", NEW."losses", NEW."mutual_riders_count", _pgh_attach_context(), NOW(), \'insert\', NEW."id", NEW."rank", NEW."ride_id", NEW."ties", NEW."winning_percentage", NEW."wins"); RETURN NULL;',
|
||||
hash="c5f9dced5824a55e6f36e476eb382ed770aa5716",
|
||||
operation="INSERT",
|
||||
pgid="pgtrigger_insert_insert_01af3",
|
||||
table="rides_rideranking",
|
||||
when="AFTER",
|
||||
),
|
||||
),
|
||||
),
|
||||
pgtrigger.migrations.AddTrigger(
|
||||
model_name="rideranking",
|
||||
trigger=pgtrigger.compiler.Trigger(
|
||||
name="update_update",
|
||||
sql=pgtrigger.compiler.UpsertTriggerSql(
|
||||
condition="WHEN (OLD.* IS DISTINCT FROM NEW.*)",
|
||||
func='INSERT INTO "rides_riderankingevent" ("average_rating", "calculation_version", "comparison_count", "id", "last_calculated", "losses", "mutual_riders_count", "pgh_context_id", "pgh_created_at", "pgh_label", "pgh_obj_id", "rank", "ride_id", "ties", "winning_percentage", "wins") VALUES (NEW."average_rating", NEW."calculation_version", NEW."comparison_count", NEW."id", NEW."last_calculated", NEW."losses", NEW."mutual_riders_count", _pgh_attach_context(), NOW(), \'update\', NEW."id", NEW."rank", NEW."ride_id", NEW."ties", NEW."winning_percentage", NEW."wins"); RETURN NULL;',
|
||||
hash="363e44ce3c87e8b66406d63d6f1b26ad604c79d2",
|
||||
operation="UPDATE",
|
||||
pgid="pgtrigger_update_update_c3f27",
|
||||
table="rides_rideranking",
|
||||
when="AFTER",
|
||||
),
|
||||
),
|
||||
),
|
||||
]
|
||||
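The `wins`, `losses`, and `ties` columns created above are filled in by pairwise comparisons between rides: for every pair, only "mutual riders" (users who rated both rides) are counted, and a higher rating is a win for that ride. A minimal standalone sketch of that counting step (function and variable names are illustrative, not the project's API):

```python
def compare_pair(ratings_a, ratings_b):
    """Count pairwise preferences among mutual riders.

    ratings_a / ratings_b map user_id -> rating; only users present in
    both dicts (the mutual riders) are counted.
    """
    a_wins = b_wins = ties = 0
    for user_id in ratings_a.keys() & ratings_b.keys():
        if ratings_a[user_id] > ratings_b[user_id]:
            a_wins += 1
        elif ratings_b[user_id] > ratings_a[user_id]:
            b_wins += 1
        else:
            ties += 1
    return a_wins, b_wins, ties

# Three mutual riders: user 1 prefers A, user 2 prefers B, user 3 ties.
# User 4 rated only ride A, so they are ignored.
print(compare_pair({1: 9, 2: 6, 3: 7, 4: 10}, {1: 8, 2: 7, 3: 7}))  # (1, 1, 1)
```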
@@ -8,9 +8,10 @@ The Company model is aliased as Manufacturer to clarify its role as ride manufac
while maintaining backward compatibility through the Company alias.
"""

from .rides import Ride, RideModel, RollerCoasterStats, Categories
from .rides import Ride, RideModel, RollerCoasterStats, Categories, CATEGORY_CHOICES
from .location import RideLocation
from .reviews import RideReview
from .rankings import RideRanking, RidePairComparison, RankingSnapshot

__all__ = [
    # Primary models
@@ -19,6 +20,10 @@ __all__ = [
    "RollerCoasterStats",
    "RideLocation",
    "RideReview",
    # Rankings
    "RideRanking",
    "RidePairComparison",
    "RankingSnapshot",
    # Shared constants
    "Categories",
]

212
backend/apps/rides/models/rankings.py
Normal file
@@ -0,0 +1,212 @@
"""
Models for ride ranking system using Internet Roller Coaster Poll algorithm.

This system calculates rankings based on pairwise comparisons between rides,
where each ride is compared to every other ride to determine which one
more riders preferred.
"""

from django.db import models
from django.utils import timezone
from django.core.validators import MinValueValidator, MaxValueValidator
import pghistory


@pghistory.track()
class RideRanking(models.Model):
    """
    Stores calculated rankings for rides using the Internet Roller Coaster Poll algorithm.

    Rankings are recalculated daily based on user reviews/ratings.
    Each ride's rank is determined by its winning percentage in pairwise comparisons.
    """

    ride = models.OneToOneField(
        "rides.Ride", on_delete=models.CASCADE, related_name="ranking"
    )

    # Core ranking metrics
    rank = models.PositiveIntegerField(
        db_index=True, help_text="Overall rank position (1 = best)"
    )
    wins = models.PositiveIntegerField(
        default=0, help_text="Number of rides this ride beats in pairwise comparisons"
    )
    losses = models.PositiveIntegerField(
        default=0,
        help_text="Number of rides that beat this ride in pairwise comparisons",
    )
    ties = models.PositiveIntegerField(
        default=0,
        help_text="Number of rides with equal preference in pairwise comparisons",
    )
    winning_percentage = models.DecimalField(
        max_digits=5,
        decimal_places=4,
        validators=[MinValueValidator(0), MaxValueValidator(1)],
        db_index=True,
        help_text="Win percentage where ties count as 0.5",
    )

    # Additional metrics
    mutual_riders_count = models.PositiveIntegerField(
        default=0, help_text="Total number of users who have rated this ride"
    )
    comparison_count = models.PositiveIntegerField(
        default=0, help_text="Number of other rides this was compared against"
    )
    average_rating = models.DecimalField(
        max_digits=3,
        decimal_places=2,
        null=True,
        blank=True,
        validators=[MinValueValidator(1), MaxValueValidator(10)],
        help_text="Average rating from all users who have rated this ride",
    )

    # Metadata
    last_calculated = models.DateTimeField(
        default=timezone.now, help_text="When this ranking was last calculated"
    )
    calculation_version = models.CharField(
        max_length=10, default="1.0", help_text="Algorithm version used for calculation"
    )

    class Meta:
        ordering = ["rank"]
        indexes = [
            models.Index(fields=["rank"]),
            models.Index(fields=["winning_percentage", "-mutual_riders_count"]),
            models.Index(fields=["ride", "last_calculated"]),
        ]
        constraints = [
            models.CheckConstraint(
                name="rideranking_winning_percentage_range",
                check=models.Q(winning_percentage__gte=0)
                & models.Q(winning_percentage__lte=1),
                violation_error_message="Winning percentage must be between 0 and 1",
            ),
            models.CheckConstraint(
                name="rideranking_average_rating_range",
                check=models.Q(average_rating__isnull=True)
                | (models.Q(average_rating__gte=1) & models.Q(average_rating__lte=10)),
                violation_error_message="Average rating must be between 1 and 10",
            ),
        ]

    def __str__(self):
        return f"#{self.rank} - {self.ride.name} ({self.winning_percentage:.1%})"

    @property
    def total_comparisons(self):
        """Total number of pairwise comparisons (wins + losses + ties)."""
        return self.wins + self.losses + self.ties


@pghistory.track()
class RidePairComparison(models.Model):
    """
    Caches pairwise comparison results between two rides.

    This model stores the results of comparing two rides based on mutual riders
    (users who have rated both rides). It's used to speed up ranking calculations.
    """

    ride_a = models.ForeignKey(
        "rides.Ride", on_delete=models.CASCADE, related_name="comparisons_as_a"
    )
    ride_b = models.ForeignKey(
        "rides.Ride", on_delete=models.CASCADE, related_name="comparisons_as_b"
    )

    # Comparison results
    ride_a_wins = models.PositiveIntegerField(
        default=0, help_text="Number of mutual riders who rated ride_a higher"
    )
    ride_b_wins = models.PositiveIntegerField(
        default=0, help_text="Number of mutual riders who rated ride_b higher"
    )
    ties = models.PositiveIntegerField(
        default=0, help_text="Number of mutual riders who rated both rides equally"
    )

    # Metrics
    mutual_riders_count = models.PositiveIntegerField(
        default=0, help_text="Total number of users who have rated both rides"
    )
    ride_a_avg_rating = models.DecimalField(
        max_digits=3,
        decimal_places=2,
        null=True,
        blank=True,
        help_text="Average rating of ride_a from mutual riders",
    )
    ride_b_avg_rating = models.DecimalField(
        max_digits=3,
        decimal_places=2,
        null=True,
        blank=True,
        help_text="Average rating of ride_b from mutual riders",
    )

    # Metadata
    last_calculated = models.DateTimeField(
        auto_now=True, help_text="When this comparison was last calculated"
    )

    class Meta:
        unique_together = [["ride_a", "ride_b"]]
        indexes = [
            models.Index(fields=["ride_a", "ride_b"]),
            models.Index(fields=["last_calculated"]),
        ]

    def __str__(self):
        winner = "TIE"
        if self.ride_a_wins > self.ride_b_wins:
            winner = self.ride_a.name
        elif self.ride_b_wins > self.ride_a_wins:
            winner = self.ride_b.name
        return f"{self.ride_a.name} vs {self.ride_b.name} - Winner: {winner}"

    @property
    def winner(self):
        """Returns the winning ride or None for a tie."""
        if self.ride_a_wins > self.ride_b_wins:
            return self.ride_a
        elif self.ride_b_wins > self.ride_a_wins:
            return self.ride_b
        return None

    @property
    def is_tie(self):
        """Returns True if the comparison resulted in a tie."""
        return self.ride_a_wins == self.ride_b_wins


class RankingSnapshot(models.Model):
    """
    Stores historical snapshots of rankings for tracking changes over time.

    This allows us to show ranking trends and movements.
    """

    ride = models.ForeignKey(
        "rides.Ride", on_delete=models.CASCADE, related_name="ranking_history"
    )
    rank = models.PositiveIntegerField()
    winning_percentage = models.DecimalField(max_digits=5, decimal_places=4)
    snapshot_date = models.DateField(
        db_index=True, help_text="Date when this ranking snapshot was taken"
    )

    class Meta:
        unique_together = [["ride", "snapshot_date"]]
        ordering = ["-snapshot_date", "rank"]
        indexes = [
            models.Index(fields=["snapshot_date", "rank"]),
            models.Index(fields=["ride", "-snapshot_date"]),
        ]

    def __str__(self):
        return f"{self.ride.name} - Rank #{self.rank} on {self.snapshot_date}"
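The `winning_percentage` stored on `RideRanking` counts each tie as half a win, as the field's help text says, and is clamped to [0, 1] by the check constraint. A small illustrative sketch of that formula (this mirrors the described algorithm, not the project's actual code; the 0.5 default for rides with no comparisons is an assumption):

```python
from decimal import Decimal

def winning_percentage(wins: int, losses: int, ties: int) -> Decimal:
    """Win rate in [0, 1] with each tie counted as 0.5 of a win;
    falls back to 0.5 for a ride with no comparisons at all."""
    total = wins + losses + ties
    if total == 0:
        return Decimal("0.5")
    return Decimal(str((wins + 0.5 * ties) / total))

# 6 wins, 2 losses, 2 ties -> (6 + 0.5 * 2) / 10
print(winning_percentage(6, 2, 2))  # 0.7
```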
@@ -1,6 +1,7 @@
from django.db import models
from django.utils.text import slugify
from django.contrib.contenttypes.fields import GenericRelation
from django.db.models import Avg
from apps.core.models import TrackedModel
from .company import Company
import pghistory
@@ -56,7 +57,11 @@ class RideModel(TrackedModel):

@pghistory.track()
class Ride(TrackedModel):
    """Model for individual ride installations at parks"""
    """Model for individual ride installations at parks

    Note: The average_rating field is denormalized and refreshed by background
    jobs. Use selectors or annotations for real-time calculations if needed.
    """

    STATUS_CHOICES = [
        ("", "Select status"),

@@ -8,7 +8,7 @@ from django.db.models import QuerySet, Q, Count, Avg, Prefetch
from django.contrib.gis.geos import Point
from django.contrib.gis.measure import Distance

from .models import Ride, RideModel, RideReview
from .models import Ride, RideModel, RideReview, CATEGORY_CHOICES


def ride_list_for_display(
@@ -32,15 +32,15 @@
            "ride_model",
            "park_area",
        )
        .prefetch_related("park__location", "location")
        .prefetch_related("park__location")
        .annotate(average_rating_calculated=Avg("reviews__rating"))
    )

    if filters:
        if "status" in filters:
            queryset = queryset.filter(status=filters["status"])
        if "category" in filters:
            queryset = queryset.filter(category=filters["category"])
        if "status" in filters and filters["status"]:
            queryset = queryset.filter(status__in=filters["status"])
        if "category" in filters and filters["category"]:
            queryset = queryset.filter(category__in=filters["category"])
        if "manufacturer" in filters:
            queryset = queryset.filter(manufacturer=filters["manufacturer"])
        if "park" in filters:
@@ -81,7 +81,6 @@ def ride_detail_optimized(*, slug: str, park_slug: str) -> Ride:
        )
        .prefetch_related(
            "park__location",
            "location",
            Prefetch(
                "reviews",
                queryset=RideReview.objects.select_related("user").filter(
@@ -164,7 +163,7 @@ def rides_in_park(*, park_slug: str) -> QuerySet[Ride]:
    return (
        Ride.objects.filter(park__slug=park_slug)
        .select_related("manufacturer", "designer", "ride_model", "park_area")
        .prefetch_related("location")
        .prefetch_related()
        .annotate(average_rating_calculated=Avg("reviews__rating"))
        .order_by("park_area__name", "name")
    )

7
backend/apps/rides/services/__init__.py
Normal file
@@ -0,0 +1,7 @@
"""
Services for the rides app.
"""

from .ranking_service import RideRankingService

__all__ = ["RideRankingService"]
550
backend/apps/rides/services/ranking_service.py
Normal file
@@ -0,0 +1,550 @@
"""
Service for calculating ride rankings using the Internet Roller Coaster Poll algorithm.

This service implements a pairwise comparison system where each ride is compared
to every other ride based on mutual riders (users who have rated both rides).
Rankings are determined by winning percentage in these comparisons.
"""

import logging
from typing import Any, Dict, List, Tuple, Optional
from decimal import Decimal
from datetime import date

from django.db import transaction
from django.db.models import Avg, Count, Q, F
from django.utils import timezone

from apps.rides.models import (
    Ride,
    RideReview,
    RideRanking,
    RidePairComparison,
    RankingSnapshot,
)


logger = logging.getLogger(__name__)


class RideRankingService:
    """
    Calculates ride rankings using the Internet Roller Coaster Poll algorithm.

    Algorithm Overview:
    1. For each pair of rides, find users who have rated both
    2. Count how many users preferred each ride (higher rating)
    3. Calculate wins, losses, and ties for each ride
    4. Rank rides by winning percentage (ties count as 0.5 wins)
    5. Break ties by head-to-head comparison
    """

    def __init__(self):
        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")
        self.calculation_version = "1.0"

    def update_all_rankings(self, category: Optional[str] = None) -> Dict[str, Any]:
        """
        Main entry point to update all ride rankings.

        Args:
            category: Optional ride category to filter ('RC' for roller coasters, etc.)
                If None, ranks all rides.

        Returns:
            Dictionary with statistics about the ranking calculation
        """
        start_time = timezone.now()
        self.logger.info(
            f"Starting ranking calculation for category: {category or 'ALL'}"
        )

        try:
            with transaction.atomic():
                # Get rides to rank
                rides = self._get_eligible_rides(category)
                if not rides:
                    self.logger.warning("No eligible rides found for ranking")
                    return {
                        "status": "skipped",
                        "message": "No eligible rides found",
                        "duration": (timezone.now() - start_time).total_seconds(),
                    }

                self.logger.info(f"Found {len(rides)} rides to rank")

                # Calculate pairwise comparisons
                comparisons = self._calculate_all_comparisons(rides)

                # Calculate rankings from comparisons
                rankings = self._calculate_rankings_from_comparisons(rides, comparisons)

                # Save rankings
                self._save_rankings(rankings)

                # Save snapshots for historical tracking
                self._save_ranking_snapshots(rankings)

                # Clean up old data
                self._cleanup_old_data()

                duration = (timezone.now() - start_time).total_seconds()
                self.logger.info(
                    f"Ranking calculation completed in {duration:.2f} seconds"
                )

                return {
                    "status": "success",
                    "rides_ranked": len(rides),
                    "comparisons_made": len(comparisons),
                    "duration": duration,
                    "timestamp": timezone.now(),
                }

        except Exception as e:
            self.logger.error(f"Error updating rankings: {e}", exc_info=True)
            raise

    def _get_eligible_rides(self, category: Optional[str] = None) -> List[Ride]:
        """
        Get rides that are eligible for ranking.

        Only includes rides that:
        - Are currently operating
        - Have at least one review/rating
        """
        queryset = (
            Ride.objects.filter(status="OPERATING", reviews__is_published=True)
            .annotate(
                review_count=Count("reviews", filter=Q(reviews__is_published=True))
            )
            .filter(review_count__gt=0)
        )

        if category:
            queryset = queryset.filter(category=category)

        return list(queryset.distinct())

    def _calculate_all_comparisons(
        self, rides: List[Ride]
    ) -> Dict[Tuple[int, int], RidePairComparison]:
        """
        Calculate pairwise comparisons for all ride pairs.

        Returns a dictionary keyed by (ride_a_id, ride_b_id) tuples.
        """
        comparisons = {}
        total_pairs = len(rides) * (len(rides) - 1) // 2
        processed = 0

        for i, ride_a in enumerate(rides):
            for ride_b in rides[i + 1 :]:
                comparison = self._calculate_pairwise_comparison(ride_a, ride_b)
                if comparison:
                    # Store both directions for easy lookup
                    comparisons[(ride_a.id, ride_b.id)] = comparison
                    comparisons[(ride_b.id, ride_a.id)] = comparison

                processed += 1
                if processed % 100 == 0:
                    self.logger.debug(
                        f"Processed {processed}/{total_pairs} comparisons"
                    )

        return comparisons

    def _calculate_pairwise_comparison(
        self, ride_a: Ride, ride_b: Ride
    ) -> Optional[RidePairComparison]:
        """
        Calculate the pairwise comparison between two rides.

        Finds users who have rated both rides and determines which ride
        they preferred based on their ratings.
        """
        # Get mutual riders (users who have rated both rides)
        ride_a_reviewers = set(
            RideReview.objects.filter(ride=ride_a, is_published=True).values_list(
                "user_id", flat=True
            )
        )

        ride_b_reviewers = set(
            RideReview.objects.filter(ride=ride_b, is_published=True).values_list(
                "user_id", flat=True
            )
        )

        mutual_riders = ride_a_reviewers & ride_b_reviewers

        if not mutual_riders:
            # No mutual riders, no comparison possible
            return None

        # Get ratings from mutual riders
        ride_a_ratings = {
            review.user_id: review.rating
            for review in RideReview.objects.filter(
                ride=ride_a, user_id__in=mutual_riders, is_published=True
            )
        }

        ride_b_ratings = {
            review.user_id: review.rating
            for review in RideReview.objects.filter(
                ride=ride_b, user_id__in=mutual_riders, is_published=True
            )
        }

        # Count wins and ties
        ride_a_wins = 0
        ride_b_wins = 0
        ties = 0

        for user_id in mutual_riders:
            rating_a = ride_a_ratings.get(user_id, 0)
            rating_b = ride_b_ratings.get(user_id, 0)

            if rating_a > rating_b:
                ride_a_wins += 1
            elif rating_b > rating_a:
                ride_b_wins += 1
            else:
                ties += 1

        # Calculate average ratings from mutual riders
        ride_a_avg = (
            sum(ride_a_ratings.values()) / len(ride_a_ratings) if ride_a_ratings else 0
        )
        ride_b_avg = (
            sum(ride_b_ratings.values()) / len(ride_b_ratings) if ride_b_ratings else 0
        )

        # Create or update comparison record
        comparison, created = RidePairComparison.objects.update_or_create(
            ride_a=ride_a if ride_a.id < ride_b.id else ride_b,
            ride_b=ride_b if ride_a.id < ride_b.id else ride_a,
            defaults={
                "ride_a_wins": ride_a_wins if ride_a.id < ride_b.id else ride_b_wins,
                "ride_b_wins": ride_b_wins if ride_a.id < ride_b.id else ride_a_wins,
                "ties": ties,
                "mutual_riders_count": len(mutual_riders),
                "ride_a_avg_rating": (
                    Decimal(str(ride_a_avg))
                    if ride_a.id < ride_b.id
                    else Decimal(str(ride_b_avg))
                ),
                "ride_b_avg_rating": (
                    Decimal(str(ride_b_avg))
                    if ride_a.id < ride_b.id
                    else Decimal(str(ride_a_avg))
                ),
            },
        )

        return comparison

    def _calculate_rankings_from_comparisons(
        self, rides: List[Ride], comparisons: Dict[Tuple[int, int], RidePairComparison]
    ) -> List[Dict]:
        """
        Calculate final rankings from pairwise comparisons.

        Returns a list of dictionaries containing ranking data for each ride.
        """
        rankings = []

        for ride in rides:
            wins = 0
            losses = 0
            ties = 0
            comparison_count = 0

            # Count wins, losses, and ties
            for other_ride in rides:
                if ride.id == other_ride.id:
                    continue

                comparison_key = (
                    min(ride.id, other_ride.id),
                    max(ride.id, other_ride.id),
                )
                comparison = comparisons.get(comparison_key)

                if not comparison:
                    continue

                comparison_count += 1

                # Determine win/loss/tie for this ride
                if comparison.ride_a_id == ride.id:
                    if comparison.ride_a_wins > comparison.ride_b_wins:
                        wins += 1
                    elif comparison.ride_a_wins < comparison.ride_b_wins:
                        losses += 1
                    else:
                        ties += 1
                else:  # ride_b_id == ride.id
                    if comparison.ride_b_wins > comparison.ride_a_wins:
                        wins += 1
                    elif comparison.ride_b_wins < comparison.ride_a_wins:
                        losses += 1
                    else:
                        ties += 1

            # Calculate winning percentage (ties count as 0.5)
            total_comparisons = wins + losses + ties
            if total_comparisons > 0:
                winning_percentage = Decimal(
                    str((wins + 0.5 * ties) / total_comparisons)
                )
            else:
                winning_percentage = Decimal("0.5")

            # Get average rating and reviewer count
            ride_stats = RideReview.objects.filter(
                ride=ride, is_published=True
            ).aggregate(
                avg_rating=Avg("rating"), reviewer_count=Count("user", distinct=True)
            )

            rankings.append(
                {
                    "ride": ride,
                    "wins": wins,
                    "losses": losses,
                    "ties": ties,
                    "winning_percentage": winning_percentage,
                    "comparison_count": comparison_count,
                    "average_rating": ride_stats["avg_rating"],
                    "mutual_riders_count": ride_stats["reviewer_count"] or 0,
                }
            )

        # Sort by winning percentage (descending), then by mutual riders count for ties
        rankings.sort(
            key=lambda x: (
                x["winning_percentage"],
                x["mutual_riders_count"],
                x["average_rating"] or 0,
            ),
            reverse=True,
        )

        # Handle tie-breaking with head-to-head comparisons
        rankings = self._apply_tiebreakers(rankings, comparisons)

        # Assign final ranks
        for i, ranking_data in enumerate(rankings, 1):
            ranking_data["rank"] = i

        return rankings

    def _apply_tiebreakers(
        self,
        rankings: List[Dict],
        comparisons: Dict[Tuple[int, int], RidePairComparison],
    ) -> List[Dict]:
        """
        Apply head-to-head tiebreaker for rides with identical winning percentages.

        If two rides have the same winning percentage, the one that beat the other
        in their head-to-head comparison gets the higher rank.
        """
        i = 0
        while i < len(rankings) - 1:
            # Find rides with same winning percentage
            tied_group = [rankings[i]]
            j = i + 1

            while (
                j < len(rankings)
                and rankings[j]["winning_percentage"]
                == rankings[i]["winning_percentage"]
            ):
                tied_group.append(rankings[j])
                j += 1

            if len(tied_group) > 1:
                # Apply head-to-head tiebreaker within the group
                tied_group = self._sort_tied_group(tied_group, comparisons)

                # Replace the tied section with sorted group
                rankings[i:j] = tied_group

            i = j

        return rankings

    def _sort_tied_group(
        self,
        tied_group: List[Dict],
        comparisons: Dict[Tuple[int, int], RidePairComparison],
    ) -> List[Dict]:
        """
        Sort a group of tied rides using head-to-head comparisons.
        """
        # Create mini-rankings within the tied group
        for ride_data in tied_group:
            mini_wins = 0
            mini_losses = 0

            for other_data in tied_group:
                if ride_data["ride"].id == other_data["ride"].id:
                    continue

                comparison_key = (
                    min(ride_data["ride"].id, other_data["ride"].id),
                    max(ride_data["ride"].id, other_data["ride"].id),
                )
                comparison = comparisons.get(comparison_key)

                if comparison:
                    if comparison.ride_a_id == ride_data["ride"].id:
                        if comparison.ride_a_wins > comparison.ride_b_wins:
                            mini_wins += 1
                        elif comparison.ride_a_wins < comparison.ride_b_wins:
                            mini_losses += 1
                    else:
                        if comparison.ride_b_wins > comparison.ride_a_wins:
                            mini_wins += 1
                        elif comparison.ride_b_wins < comparison.ride_a_wins:
                            mini_losses += 1

            ride_data["tiebreaker_score"] = mini_wins - mini_losses

        # Sort by tiebreaker score, then by mutual riders count, then by average rating
        tied_group.sort(
            key=lambda x: (
                x["tiebreaker_score"],
                x["mutual_riders_count"],
                x["average_rating"] or 0,
            ),
            reverse=True,
        )

        return tied_group

    def _save_rankings(self, rankings: List[Dict]):
        """Save calculated rankings to the database."""
        for ranking_data in rankings:
            RideRanking.objects.update_or_create(
                ride=ranking_data["ride"],
                defaults={
                    "rank": ranking_data["rank"],
                    "wins": ranking_data["wins"],
                    "losses": ranking_data["losses"],
                    "ties": ranking_data["ties"],
                    "winning_percentage": ranking_data["winning_percentage"],
                    "mutual_riders_count": ranking_data["mutual_riders_count"],
|
||||
"comparison_count": ranking_data["comparison_count"],
|
||||
"average_rating": ranking_data["average_rating"],
|
||||
"last_calculated": timezone.now(),
|
||||
"calculation_version": self.calculation_version,
|
||||
},
|
||||
)
|
||||
|
||||
def _save_ranking_snapshots(self, rankings: List[Dict]):
|
||||
"""Save ranking snapshots for historical tracking."""
|
||||
today = date.today()
|
||||
|
||||
for ranking_data in rankings:
|
||||
RankingSnapshot.objects.update_or_create(
|
||||
ride=ranking_data["ride"],
|
||||
snapshot_date=today,
|
||||
defaults={
|
||||
"rank": ranking_data["rank"],
|
||||
"winning_percentage": ranking_data["winning_percentage"],
|
||||
},
|
||||
)
|
||||
|
||||
def _cleanup_old_data(self, days_to_keep: int = 365):
|
||||
"""Clean up old comparison and snapshot data."""
|
||||
cutoff_date = timezone.now() - timezone.timedelta(days=days_to_keep)
|
||||
|
||||
# Delete old snapshots
|
||||
deleted_snapshots = RankingSnapshot.objects.filter(
|
||||
snapshot_date__lt=cutoff_date.date()
|
||||
).delete()
|
||||
|
||||
if deleted_snapshots[0] > 0:
|
||||
self.logger.info(f"Deleted {deleted_snapshots[0]} old ranking snapshots")
|
||||
|
||||
def get_ride_ranking_details(self, ride: Ride) -> Optional[Dict]:
|
||||
"""
|
||||
Get detailed ranking information for a specific ride.
|
||||
|
||||
Returns dictionary with ranking details or None if not ranked.
|
||||
"""
|
||||
try:
|
||||
ranking = RideRanking.objects.get(ride=ride)
|
||||
|
||||
# Get recent head-to-head comparisons
|
||||
comparisons = (
|
||||
RidePairComparison.objects.filter(Q(ride_a=ride) | Q(ride_b=ride))
|
||||
.select_related("ride_a", "ride_b")
|
||||
.order_by("-mutual_riders_count")[:10]
|
||||
)
|
||||
|
||||
# Get ranking history
|
||||
history = RankingSnapshot.objects.filter(ride=ride).order_by(
|
||||
"-snapshot_date"
|
||||
)[:30]
|
||||
|
||||
return {
|
||||
"current_rank": ranking.rank,
|
||||
"winning_percentage": ranking.winning_percentage,
|
||||
"wins": ranking.wins,
|
||||
"losses": ranking.losses,
|
||||
"ties": ranking.ties,
|
||||
"average_rating": ranking.average_rating,
|
||||
"mutual_riders_count": ranking.mutual_riders_count,
|
||||
"last_calculated": ranking.last_calculated,
|
||||
"head_to_head": [
|
||||
{
|
||||
"opponent": (
|
||||
comp.ride_b if comp.ride_a_id == ride.id else comp.ride_a
|
||||
),
|
||||
"result": (
|
||||
"win"
|
||||
if (
|
||||
(
|
||||
comp.ride_a_id == ride.id
|
||||
and comp.ride_a_wins > comp.ride_b_wins
|
||||
)
|
||||
or (
|
||||
comp.ride_b_id == ride.id
|
||||
and comp.ride_b_wins > comp.ride_a_wins
|
||||
)
|
||||
)
|
||||
else (
|
||||
"loss"
|
||||
if (
|
||||
(
|
||||
comp.ride_a_id == ride.id
|
||||
and comp.ride_a_wins < comp.ride_b_wins
|
||||
)
|
||||
or (
|
||||
comp.ride_b_id == ride.id
|
||||
and comp.ride_b_wins < comp.ride_a_wins
|
||||
)
|
||||
)
|
||||
else "tie"
|
||||
)
|
||||
),
|
||||
"mutual_riders": comp.mutual_riders_count,
|
||||
}
|
||||
for comp in comparisons
|
||||
],
|
||||
"ranking_history": [
|
||||
{
|
||||
"date": snapshot.snapshot_date,
|
||||
"rank": snapshot.rank,
|
||||
"winning_percentage": snapshot.winning_percentage,
|
||||
}
|
||||
for snapshot in history
|
||||
],
|
||||
}
|
||||
except RideRanking.DoesNotExist:
|
||||
return None
|
||||
@@ -53,6 +53,23 @@ urlpatterns = [
        views.get_search_suggestions,
        name="search_suggestions",
    ),
    # Ranking endpoints
    path("rankings/", views.RideRankingsView.as_view(), name="rankings"),
    path(
        "rankings/<slug:ride_slug>/",
        views.RideRankingDetailView.as_view(),
        name="ranking_detail",
    ),
    path(
        "rankings/<slug:ride_slug>/history-chart/",
        views.ranking_history_chart,
        name="ranking_history_chart",
    ),
    path(
        "rankings/<slug:ride_slug>/comparisons/",
        views.ranking_comparisons,
        name="ranking_comparisons",
    ),
    # Park-specific URLs
    path("create/", views.RideCreateView.as_view(), name="ride_create"),
    path("<slug:ride_slug>/", views.RideDetailView.as_view(), name="ride_detail"),
@@ -12,6 +12,8 @@ from .forms import RideForm, RideSearchForm
from apps.parks.models import Park
from apps.moderation.mixins import EditSubmissionMixin, HistoryMixin
from apps.moderation.models import EditSubmission
from .models.rankings import RideRanking, RankingSnapshot
from .services.ranking_service import RideRankingService


class ParkContextRequired:
@@ -452,3 +454,166 @@ class RideSearchView(ListView):
        context = super().get_context_data(**kwargs)
        context["search_form"] = RideSearchForm(self.request.GET)
        return context


class RideRankingsView(ListView):
    """View for displaying ride rankings using the Internet Roller Coaster Poll algorithm."""

    model = RideRanking
    template_name = "rides/rankings.html"
    context_object_name = "rankings"
    paginate_by = 50

    def get_queryset(self):
        """Get rankings with optimized queries."""
        queryset = RideRanking.objects.select_related(
            "ride", "ride__park", "ride__manufacturer", "ride__ride_model"
        ).order_by("rank")

        # Filter by category if specified
        category = self.request.GET.get("category")
        if category and category != "all":
            queryset = queryset.filter(ride__category=category)

        # Filter by minimum mutual riders
        min_riders = self.request.GET.get("min_riders")
        if min_riders:
            try:
                min_riders = int(min_riders)
                queryset = queryset.filter(mutual_riders_count__gte=min_riders)
            except ValueError:
                pass

        return queryset

    def get_template_names(self):
        """Return the appropriate template based on the request type."""
        if self.request.htmx:
            return ["rides/partials/rankings_table.html"]
        return [self.template_name]

    def get_context_data(self, **kwargs):
        """Add context for the rankings view."""
        context = super().get_context_data(**kwargs)
        context["category_choices"] = Categories
        context["selected_category"] = self.request.GET.get("category", "all")
        context["min_riders"] = self.request.GET.get("min_riders", "")

        # Add statistics
        if self.object_list:
            context["total_ranked"] = RideRanking.objects.count()
            context["last_updated"] = (
                self.object_list[0].last_calculated if self.object_list else None
            )

        return context


class RideRankingDetailView(DetailView):
    """View for displaying detailed ranking information for a specific ride."""

    model = Ride
    template_name = "rides/ranking_detail.html"
    slug_url_kwarg = "ride_slug"

    def get_queryset(self):
        """Get the ride with its ranking data."""
        return Ride.objects.select_related(
            "park", "manufacturer", "ranking"
        ).prefetch_related("comparisons_as_a", "comparisons_as_b", "ranking_history")

    def get_context_data(self, **kwargs):
        """Add ranking details to the context."""
        context = super().get_context_data(**kwargs)

        # Get ranking details from the service
        service = RideRankingService()
        ranking_details = service.get_ride_ranking_details(self.object)

        if ranking_details:
            context.update(ranking_details)

            # Get recent movement
            recent_snapshots = RankingSnapshot.objects.filter(
                ride=self.object
            ).order_by("-snapshot_date")[:7]

            if len(recent_snapshots) >= 2:
                context["rank_change"] = (
                    recent_snapshots[0].rank - recent_snapshots[1].rank
                )
                context["previous_rank"] = recent_snapshots[1].rank
        else:
            context["not_ranked"] = True

        return context


def ranking_history_chart(request: HttpRequest, ride_slug: str) -> HttpResponse:
    """HTMX endpoint for ranking history chart data."""
    ride = get_object_or_404(Ride, slug=ride_slug)

    # Get the last 30 days of ranking history
    history = RankingSnapshot.objects.filter(ride=ride).order_by("-snapshot_date")[:30]

    # Prepare data for the chart
    chart_data = [
        {
            "date": snapshot.snapshot_date.isoformat(),
            "rank": snapshot.rank,
            "win_pct": float(snapshot.winning_percentage) * 100,
        }
        for snapshot in reversed(history)
    ]

    return render(
        request,
        "rides/partials/ranking_chart.html",
        {"chart_data": chart_data, "ride": ride},
    )


def ranking_comparisons(request: HttpRequest, ride_slug: str) -> HttpResponse:
    """HTMX endpoint for ride head-to-head comparisons."""
    ride = get_object_or_404(Ride, slug=ride_slug)

    # Get head-to-head comparisons
    from django.db.models import Q
    from .models.rankings import RidePairComparison

    comparisons = (
        RidePairComparison.objects.filter(Q(ride_a=ride) | Q(ride_b=ride))
        .select_related("ride_a", "ride_b", "ride_a__park", "ride_b__park")
        .order_by("-mutual_riders_count")[:20]
    )

    # Format comparisons for display
    comparison_data = []
    for comp in comparisons:
        if comp.ride_a == ride:
            opponent = comp.ride_b
            wins = comp.ride_a_wins
            losses = comp.ride_b_wins
        else:
            opponent = comp.ride_a
            wins = comp.ride_b_wins
            losses = comp.ride_a_wins

        result = "win" if wins > losses else "loss" if losses > wins else "tie"

        comparison_data.append(
            {
                "opponent": opponent,
                "wins": wins,
                "losses": losses,
                "ties": comp.ties,
                "result": result,
                "mutual_riders": comp.mutual_riders_count,
            }
        )

    return render(
        request,
        "rides/partials/ranking_comparisons.html",
        {"comparisons": comparison_data, "ride": ride},
    )
@@ -119,7 +119,6 @@ MIDDLEWARE = [
    "allauth.account.middleware.AccountMiddleware",
    "django.middleware.cache.FetchFromCacheMiddleware",
    "django_htmx.middleware.HtmxMiddleware",
    "core.middleware.PageViewMiddleware",  # Add our page view tracking
]

ROOT_URLCONF = "thrillwiki.urls"
@@ -299,9 +298,14 @@ SPECTACULAR_SETTINGS = {
    "TAGS": [
        {"name": "Parks", "description": "Theme park operations"},
        {"name": "Rides", "description": "Ride information and management"},
        {"name": "Statistics",
         "description": "Statistical endpoints providing aggregated data and insights"},
        {"name": "Reviews", "description": "User reviews and ratings for parks and rides"},
        {
            "name": "Statistics",
            "description": "Statistical endpoints providing aggregated data and insights",
        },
        {
            "name": "Reviews",
            "description": "User reviews and ratings for parks and rides",
        },
        {"name": "locations", "description": "Geographic location services"},
        {"name": "accounts", "description": "User account management"},
        {"name": "media", "description": "Media and image management"},
@@ -5,6 +5,7 @@ Local development settings for thrillwiki project.
import logging
from .base import *
from ..settings import database
from pythonjsonlogger import jsonlogger

# Import the module and use its members, e.g., email.EMAIL_HOST
@@ -53,6 +53,7 @@ dependencies = [
    "django-extensions>=4.1",
    "werkzeug>=3.1.3",
    "django-widget-tweaks>=1.5.0",
    "redis>=6.4.0",
]

[dependency-groups]

@@ -70,3 +71,6 @@ typeCheckingMode = "basic"

[tool.pylance]
stubPath = "stubs"

[tool.uv.sources]
python-json-logger = { url = "https://github.com/nhairs/python-json-logger/releases/download/v3.0.0/python_json_logger-3.0.0-py3-none-any.whl" }
@@ -17,22 +17,22 @@ print("Cleared existing social apps")

# Create Google social app
google_app = SocialApp.objects.create(
    provider='google',
    name='Google',
    client_id='demo-google-client-id.apps.googleusercontent.com',
    secret='demo-google-client-secret',
    key='',  # Not used for Google
    provider="google",
    name="Google",
    client_id="demo-google-client-id.apps.googleusercontent.com",
    secret="demo-google-client-secret",
    key="",  # Not used for Google
)
google_app.sites.add(site)
print("✅ Created Google social app")

# Create Discord social app
discord_app = SocialApp.objects.create(
    provider='discord',
    name='Discord',
    client_id='demo-discord-client-id',
    secret='demo-discord-client-secret',
    key='',  # Not used for Discord
    provider="discord",
    name="Discord",
    client_id="demo-discord-client-id",
    secret="demo-discord-client-secret",
    key="",  # Not used for Discord
)
discord_app.sites.add(site)
print("✅ Created Discord social app")
@@ -77,7 +77,7 @@ MIDDLEWARE = [
    "allauth.account.middleware.AccountMiddleware",
    "django.middleware.cache.FetchFromCacheMiddleware",
    "django_htmx.middleware.HtmxMiddleware",
    "apps.core.middleware.PageViewMiddleware",  # Add our page view tracking
    "apps.core.middleware.view_tracking.ViewTrackingMiddleware",  # Add our page view tracking
]

ROOT_URLCONF = "thrillwiki.urls"
@@ -119,13 +119,18 @@ DATABASES = {
# Cache settings
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "unique-snowflake",
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
        "TIMEOUT": 300,  # 5 minutes
        "OPTIONS": {"MAX_ENTRIES": 1000},
    }
}

# Redis settings for trending cache
REDIS_URL = "redis://127.0.0.1:6379/0"

CACHE_MIDDLEWARE_SECONDS = 1  # 1 second
CACHE_MIDDLEWARE_KEY_PREFIX = "thrillwiki"
27
backend/uv.lock
generated
@@ -1548,13 +1548,22 @@ wheels = [

[[package]]
name = "python-json-logger"
version = "3.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/9e/de/d3144a0bceede957f961e975f3752760fbe390d57fbe194baf709d8f1f7b/python_json_logger-3.3.0.tar.gz", hash = "sha256:12b7e74b17775e7d565129296105bbe3910842d9d0eb083fc83a6a617aa8df84", size = 16642, upload-time = "2025-03-07T07:08:27.301Z" }
version = "3.0.0"
source = { url = "https://github.com/nhairs/python-json-logger/releases/download/v3.0.0/python_json_logger-3.0.0-py3-none-any.whl" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/08/20/0f2523b9e50a8052bc6a8b732dfc8568abbdc42010aef03a2d750bdab3b2/python_json_logger-3.3.0-py3-none-any.whl", hash = "sha256:dd980fae8cffb24c13caf6e158d3d61c0d6d22342f932cb6e9deedab3d35eec7", size = 15163, upload-time = "2025-03-07T07:08:25.627Z" },
    { url = "https://github.com/nhairs/python-json-logger/releases/download/v3.0.0/python_json_logger-3.0.0-py3-none-any.whl", hash = "sha256:45c59c69d4a4e398b37e77c6b6f0f1663c829516a5063ff4c2bc0ba314b1f6f7" },
]

[package.metadata]
requires-dist = [
    { name = "black", marker = "extra == 'lint'" },
    { name = "mypy", marker = "extra == 'lint'" },
    { name = "pylint", marker = "extra == 'lint'" },
    { name = "pytest", marker = "extra == 'test'" },
    { name = "validate-pyproject", extras = ["all"], marker = "extra == 'lint'" },
]
provides-extras = ["lint", "test"]

[[package]]
name = "python-slugify"
version = "8.0.4"

@@ -1914,6 +1923,7 @@ dependencies = [
    { name = "pytest-playwright" },
    { name = "python-dotenv" },
    { name = "python-json-logger" },
    { name = "redis" },
    { name = "requests" },
    { name = "sentry-sdk" },
    { name = "werkzeug" },

@@ -1975,7 +1985,8 @@ requires-dist = [
    { name = "pytest-django", specifier = ">=4.9.0" },
    { name = "pytest-playwright", specifier = ">=0.4.3" },
    { name = "python-dotenv", specifier = ">=1.0.1" },
    { name = "python-json-logger", specifier = ">=2.0.7" },
    { name = "python-json-logger", url = "https://github.com/nhairs/python-json-logger/releases/download/v3.0.0/python_json_logger-3.0.0-py3-none-any.whl" },
    { name = "redis", specifier = ">=6.4.0" },
    { name = "requests", specifier = ">=2.32.3" },
    { name = "sentry-sdk", specifier = ">=1.40.0" },
    { name = "werkzeug", specifier = ">=3.1.3" },

@@ -2067,11 +2078,11 @@ wheels = [

[[package]]
name = "typing-extensions"
version = "4.14.1"
version = "4.15.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/98/5a/da40306b885cc8c09109dc2e1abd358d5684b1425678151cdaed4731c822/typing_extensions-4.14.1.tar.gz", hash = "sha256:38b39f4aeeab64884ce9f74c94263ef78f3c22467c8724005483154c26648d36", size = 107673, upload-time = "2025-07-04T13:28:34.16Z" }
sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/b5/00/d631e67a838026495268c2f6884f3711a15a9a2a96cd244fdaea53b823fb/typing_extensions-4.14.1-py3-none-any.whl", hash = "sha256:d1e1e3b58374dc93031d6eda2420a48ea44a36c2b4766a4fdeb3710755731d76", size = 43906, upload-time = "2025-07-04T13:28:32.743Z" },
    { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]

[[package]]
Binary file not shown.
Binary file not shown.
Binary file not shown.
716
docs/ride-ranking-implementation.md
Normal file
@@ -0,0 +1,716 @@
# Ride Ranking System - Complete Implementation Documentation

## Table of Contents
1. [Overview](#overview)
2. [Backend Implementation](#backend-implementation)
3. [Frontend Implementation](#frontend-implementation)
4. [API Reference](#api-reference)
5. [Usage Examples](#usage-examples)
6. [Deployment & Maintenance](#deployment--maintenance)

## Overview

The ThrillWiki Ride Ranking System implements the Internet Roller Coaster Poll (IRCP) algorithm to provide fair, data-driven rankings of theme park rides based on user ratings. This document covers the complete implementation across both backend (Django) and frontend (Vue.js/TypeScript) components.

### Key Features
- **Pairwise Comparison Algorithm**: Compares every ride against every other ride based on mutual riders
- **Web Interface**: Browse rankings with filtering and detailed views
- **REST API**: Comprehensive API for programmatic access
- **Historical Tracking**: Track ranking changes over time
- **Statistical Analysis**: Head-to-head comparisons and win/loss records

## Backend Implementation

### Database Models

#### Location: `apps/rides/models/rankings.py`

```python
# Core ranking models
- RideRanking: Current ranking data for each ride
- RidePairComparison: Cached pairwise comparison results
- RankingSnapshot: Historical ranking data
```

#### Key Fields

**RideRanking Model**:
- `rank` (Integer): Overall ranking position
- `wins` (Integer): Number of head-to-head wins
- `losses` (Integer): Number of head-to-head losses
- `ties` (Integer): Number of tied comparisons
- `winning_percentage` (Decimal): Win percentage (ties count as 0.5)
- `mutual_riders_count` (Integer): Total users who rated this ride
- `average_rating` (Decimal): Average user rating
- `last_calculated` (DateTime): Timestamp of last calculation
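As the field list notes, ties count as half a win in the winning percentage. A minimal sketch of that formula (the function name is illustrative, not part of the codebase):

```python
def winning_percentage(wins: int, losses: int, ties: int) -> float:
    """IRCP-style winning percentage: each tie is worth half a win."""
    total = wins + losses + ties
    if total == 0:
        return 0.0
    return (wins + 0.5 * ties) / total
```

So a ride with 3 wins, 1 loss and 0 ties scores 0.75, and an all-tie record sits at exactly 0.5.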
### Service Layer

#### Location: `apps/rides/services/ranking_service.py`

The `RideRankingService` class implements the core ranking algorithm:

```python
class RideRankingService:
    def update_all_rankings(category=None):
        """Main entry point for ranking calculation"""

    def _calculate_pairwise_comparison(ride_a, ride_b):
        """Compare two rides based on mutual riders"""

    def _calculate_rankings_from_comparisons():
        """Convert comparisons to rankings"""

    def _apply_tiebreakers():
        """Resolve ties using head-to-head comparisons"""
```
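The pairwise step can be illustrated in isolation. This sketch assumes plain dicts of user id to rating rather than the actual ORM queries; each mutual rater effectively votes for the ride they rated higher:

```python
def compare_rides(
    ratings_a: dict[int, float], ratings_b: dict[int, float]
) -> tuple[int, int, int]:
    """Return (a_wins, b_wins, ties) over users who rated both rides."""
    a_wins = b_wins = ties = 0
    for user in ratings_a.keys() & ratings_b.keys():  # mutual riders only
        if ratings_a[user] > ratings_b[user]:
            a_wins += 1
        elif ratings_a[user] < ratings_b[user]:
            b_wins += 1
        else:
            ties += 1
    return a_wins, b_wins, ties
```

Users who rated only one of the two rides are ignored, which is what keeps the comparison fair between rides with very different audience sizes.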
### Django Views

#### Location: `apps/rides/views.py`

**Web Views**:
```python
class RideRankingsView(ListView):
    """Main rankings list page with filtering"""
    template_name = 'rides/rankings.html'
    paginate_by = 50

class RideRankingDetailView(DetailView):
    """Detailed ranking view for a specific ride"""
    template_name = 'rides/ranking_detail.html'
```

**HTMX Endpoints**:
- `ranking_history_chart`: Returns chart data for ranking history
- `ranking_comparisons`: Returns head-to-head comparison data

### URL Configuration

#### Location: `apps/rides/urls.py`

```python
urlpatterns = [
    path('rankings/', RideRankingsView.as_view(), name='rankings'),
    path('rankings/<slug:ride_slug>/', RideRankingDetailView.as_view(), name='ranking_detail'),
    path('rankings/<slug:ride_slug>/history-chart/', ranking_history_chart, name='ranking_history_chart'),
    path('rankings/<slug:ride_slug>/comparisons/', ranking_comparisons, name='ranking_comparisons'),
]
```

### API Implementation

#### Serializers
**Location**: `apps/api/v1/serializers_rankings.py`

```python
class RideRankingSerializer(serializers.ModelSerializer):
    """Basic ranking data serialization"""

class RideRankingDetailSerializer(serializers.ModelSerializer):
    """Detailed ranking with relationships"""

class RankingSnapshotSerializer(serializers.ModelSerializer):
    """Historical ranking data"""

class RankingStatsSerializer(serializers.Serializer):
    """System-wide statistics"""
```

#### ViewSets
**Location**: `apps/api/v1/viewsets_rankings.py`

```python
class RideRankingViewSet(viewsets.ReadOnlyModelViewSet):
    """
    REST API endpoint for ride rankings.
    Supports filtering, ordering, and custom actions.
    """

    @action(detail=True, methods=['get'])
    def history(self, request, pk=None):
        """Get historical ranking data"""

    @action(detail=True, methods=['get'])
    def comparisons(self, request, pk=None):
        """Get head-to-head comparisons"""

    @action(detail=False, methods=['get'])
    def statistics(self, request):
        """Get system-wide statistics"""

class TriggerRankingCalculationView(APIView):
    """Admin endpoint to trigger a manual calculation"""
```

#### API URLs
**Location**: `apps/api/v1/urls.py`

```python
router.register(r'rankings', RideRankingViewSet, basename='ranking')

urlpatterns = [
    path('', include(router.urls)),
    path('rankings/calculate/', TriggerRankingCalculationView.as_view()),
]
```

### Management Commands

#### Location: `apps/rides/management/commands/update_ride_rankings.py`

```bash
# Update all rankings
python manage.py update_ride_rankings

# Update a specific category
python manage.py update_ride_rankings --category RC
```
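Since rankings are recalculated in batch, the command is typically run on a schedule. A crontab sketch (the installation path and log location are assumptions, not part of the repository):

```
# Recalculate all ride rankings nightly at 02:00 (paths are illustrative)
0 2 * * * cd /srv/thrillwiki/backend && uv run manage.py update_ride_rankings >> /var/log/thrillwiki/rankings.log 2>&1
```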
### Admin Interface

#### Location: `apps/rides/admin.py`

```python
@admin.register(RideRanking)
class RideRankingAdmin(admin.ModelAdmin):
    list_display = ['rank', 'ride', 'winning_percentage', 'wins', 'losses']
    list_filter = ['ride__category', 'last_calculated']
    search_fields = ['ride__name']
    ordering = ['rank']
```

## Frontend Implementation

### TypeScript Type Definitions

#### Location: `frontend/src/types/index.ts`

```typescript
// Core ranking types
export interface RideRanking {
  id: number
  rank: number
  ride: {
    id: number
    name: string
    slug: string
    park: {
      id: number
      name: string
      slug: string
    }
    category: 'RC' | 'DR' | 'FR' | 'WR' | 'TR' | 'OT'
  }
  wins: number
  losses: number
  ties: number
  winning_percentage: number
  mutual_riders_count: number
  comparison_count: number
  average_rating: number
  last_calculated: string
  rank_change?: number
  previous_rank?: number | null
}

export interface RideRankingDetail extends RideRanking {
  // `ride` intersects the base shape with extended ride information
  ride: RideRanking['ride'] & {
    description?: string
    manufacturer?: { id: number; name: string }
    opening_date?: string
    status: string
  }
  calculation_version?: string
  head_to_head_comparisons?: HeadToHeadComparison[]
  ranking_history?: RankingSnapshot[]
}

export interface HeadToHeadComparison {
  opponent: {
    id: number
    name: string
    slug: string
    park: string
  }
  wins: number
  losses: number
  ties: number
  result: 'win' | 'loss' | 'tie'
  mutual_riders: number
}

export interface RankingSnapshot {
  date: string
  rank: number
  winning_percentage: number
}

export interface RankingStatistics {
  total_ranked_rides: number
  total_comparisons: number
  last_calculation_time: string
  calculation_duration: number
  top_rated_ride?: RideInfo
  most_compared_ride?: RideInfo
  biggest_rank_change?: RankChangeInfo
}
```

### API Service Class

#### Location: `frontend/src/services/api.ts`

```typescript
export class RankingsApi {
  // Core API methods
  async getRankings(params?: RankingParams): Promise<ApiResponse<RideRanking>>
  async getRankingDetail(rideSlug: string): Promise<RideRankingDetail>
  async getRankingHistory(rideSlug: string): Promise<RankingSnapshot[]>
  async getHeadToHeadComparisons(rideSlug: string): Promise<HeadToHeadComparison[]>
  async getRankingStatistics(): Promise<RankingStatistics>
  async calculateRankings(category?: string): Promise<CalculationResult>

  // Convenience methods
  async getTopRankings(limit: number, category?: string): Promise<RideRanking[]>
  async getParkRankings(parkSlug: string, params?: Params): Promise<ApiResponse<RideRanking>>
  async searchRankings(query: string): Promise<RideRanking[]>
  async getRankChange(rideSlug: string): Promise<RankChangeInfo>
}
```

### Integration with Main API

```typescript
// Singleton instance with all API services
export const api = new ThrillWikiApi()

// Direct access to the rankings API
export const rankingsApi = api.rankings
```

## API Reference

### REST Endpoints

#### Get Rankings List
```http
GET /api/v1/rankings/
```

**Query Parameters**:
- `page` (integer): Page number
- `page_size` (integer): Results per page (default: 20)
- `category` (string): Filter by category (RC, DR, FR, WR, TR, OT)
- `min_riders` (integer): Minimum mutual riders
- `park` (string): Filter by park slug
- `ordering` (string): Sort order (`rank`, `-rank`, `winning_percentage`, `-winning_percentage`)

**Response**:
```json
{
  "count": 523,
  "next": "http://api.example.com/api/v1/rankings/?page=2",
  "previous": null,
  "results": [
    {
      "id": 1,
      "rank": 1,
      "ride": {
        "id": 123,
        "name": "Steel Vengeance",
        "slug": "steel-vengeance",
        "park": {
          "id": 45,
          "name": "Cedar Point",
          "slug": "cedar-point"
        },
        "category": "RC"
      },
      "wins": 487,
      "losses": 23,
      "ties": 13,
      "winning_percentage": 0.9405,
      "mutual_riders_count": 1543,
      "comparison_count": 523,
      "average_rating": 9.4,
      "last_calculated": "2024-01-15T02:00:00Z"
    }
  ]
}
```
|
||||
#### Get Ranking Details
|
||||
```http
|
||||
GET /api/v1/rankings/{ride-slug}/
|
||||
```
|
||||
|
||||
**Response**: Extended ranking data with full ride details, comparisons, and history
|
||||
|
||||
#### Get Ranking History
|
||||
```http
|
||||
GET /api/v1/rankings/{ride-slug}/history/
|
||||
```
|
||||
|
||||
**Response**: Array of ranking snapshots (last 90 days)
|
||||
|
||||
#### Get Head-to-Head Comparisons
|
||||
```http
|
||||
GET /api/v1/rankings/{ride-slug}/comparisons/
|
||||
```
|
||||
|
||||
**Response**: Array of comparison results with all other rides
|
||||
|
||||
#### Get Statistics
|
||||
```http
|
||||
GET /api/v1/rankings/statistics/
|
||||
```
|
||||
|
||||
**Response**: System-wide ranking statistics
|
||||
|
||||
#### Trigger Calculation (Admin)
|
||||
```http
|
||||
POST /api/v1/rankings/calculate/
|
||||
```
|
||||
|
||||
**Request Body**:
|
||||
```json
|
||||
{
|
||||
"category": "RC" // Optional
|
||||
}
|
||||
```
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"status": "success",
|
||||
"rides_ranked": 523,
|
||||
"comparisons_made": 136503,
|
||||
"duration": 45.23,
|
||||
"timestamp": "2024-01-15T02:00:45Z"
|
||||
}
|
||||
```
|
||||
|
||||
## Usage Examples

### Frontend (Vue.js/TypeScript)

#### Display Top Rankings

```typescript
import { rankingsApi } from '@/services/api'

export default {
  async mounted() {
    try {
      // Get top 10 rankings
      const topRides = await rankingsApi.getTopRankings(10)
      this.rankings = topRides

      // Get roller coasters only
      const response = await rankingsApi.getRankings({
        category: 'RC',
        page_size: 20,
        ordering: 'rank'
      })
      this.rollerCoasters = response.results
    } catch (error) {
      console.error('Failed to load rankings:', error)
    }
  }
}
```

#### Display Ranking Details

```typescript
// In a Vue component
async loadRankingDetails(rideSlug: string) {
  const [details, history, comparisons] = await Promise.all([
    rankingsApi.getRankingDetail(rideSlug),
    rankingsApi.getRankingHistory(rideSlug),
    rankingsApi.getHeadToHeadComparisons(rideSlug)
  ])

  this.rankingDetails = details
  this.chartData = this.prepareChartData(history)
  this.comparisons = comparisons
}
```

#### Search Rankings

```typescript
async searchRides(query: string) {
  const results = await rankingsApi.searchRankings(query)
  this.searchResults = results
}
```

### Backend (Python/Django)

#### Access Rankings in Views

```python
from apps.rides.models import RideRanking

# Get top 10 rides
top_rides = RideRanking.objects.select_related('ride', 'ride__park').order_by('rank')[:10]

# Get rankings for a specific category
coaster_rankings = RideRanking.objects.filter(
    ride__category='RC'
).order_by('rank')

# Get ranking with change indicator
ranking = RideRanking.objects.get(ride__slug='millennium-force')
if ranking.previous_rank:
    change = ranking.previous_rank - ranking.rank
    direction = 'up' if change > 0 else 'down' if change < 0 else 'same'
```

#### Trigger Ranking Update

```python
from apps.rides.services.ranking_service import RideRankingService

# Update all rankings
service = RideRankingService()
result = service.update_all_rankings()

# Update specific category
result = service.update_all_rankings(category='RC')

print(f"Ranked {result['rides_ranked']} rides")
print(f"Made {result['comparisons_made']} comparisons")
print(f"Duration: {result['duration']:.2f} seconds")
```

### Command Line

```bash
# Update rankings via management command
uv run python manage.py update_ride_rankings

# Update only roller coasters
uv run python manage.py update_ride_rankings --category RC

# Schedule daily updates with cron
0 2 * * * cd /path/to/project && uv run python manage.py update_ride_rankings
```

## Deployment & Maintenance

### Initial Setup

1. **Run Migrations**:
   ```bash
   uv run python manage.py migrate
   ```

2. **Initial Ranking Calculation**:
   ```bash
   uv run python manage.py update_ride_rankings
   ```

3. **Verify in Admin**:
   - Navigate to `/admin/rides/rideranking/`
   - Verify rankings are populated

### Scheduled Updates

Add to crontab for daily updates:

```bash
# Update rankings daily at 2 AM
0 2 * * * cd /path/to/thrillwiki && uv run python manage.py update_ride_rankings

# Optional: update different categories at different times
0 2 * * * cd /path/to/thrillwiki && uv run python manage.py update_ride_rankings --category RC
0 3 * * * cd /path/to/thrillwiki && uv run python manage.py update_ride_rankings --category DR
```

### Monitoring

1. **Check Logs**:
   ```bash
   tail -f /path/to/logs/ranking_updates.log
   ```

2. **Monitor Performance**:
   - Track calculation duration via the API statistics endpoint
   - Monitor database query performance
   - Check comparison cache hit rates

3. **Data Validation**:
   ```python
   # Check for ranking anomalies
   from apps.rides.models import RideRanking

   # Verify all ranks are unique
   ranks = RideRanking.objects.values_list('rank', flat=True)
   assert len(ranks) == len(set(ranks))

   # Check winning percentage calculation
   for ranking in RideRanking.objects.all():
       expected = (ranking.wins + 0.5 * ranking.ties) / ranking.comparison_count
       assert abs(ranking.winning_percentage - expected) < 0.001
   ```

### Performance Optimization

1. **Database Indexes**:
   ```sql
   -- Ensure these indexes exist
   CREATE INDEX idx_ranking_rank ON rides_rideranking(rank);
   CREATE INDEX idx_ranking_ride ON rides_rideranking(ride_id);
   CREATE INDEX idx_comparison_rides ON rides_ridepaircomparison(ride_a_id, ride_b_id);
   ```

2. **Cache Configuration**:
   ```python
   # settings.py
   CACHES = {
       'rankings': {
           'BACKEND': 'django.core.cache.backends.redis.RedisCache',
           'LOCATION': 'redis://127.0.0.1:6379/2',
           'TIMEOUT': 3600,  # 1 hour
       }
   }
   ```

3. **Batch Processing**:
   - Process comparisons in batches of 1000
   - Use `bulk_create` for database inserts
   - Consider parallel processing for large datasets
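
The batching guideline above can be sketched with a generic chunking helper. This is a minimal illustration in plain Python; the batch size of 1000 comes from the guideline, while feeding each batch to `bulk_create` is left to the service:

```python
from itertools import islice


def batched(iterable, size=1000):
    """Yield successive lists of at most `size` items, e.g. comparison
    rows to be inserted with a single bulk_create call per batch."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch
```

A hypothetical usage inside the service would then be `for batch in batched(comparison_rows): RidePairComparison.objects.bulk_create(batch)`, issuing one insert per 1000 rows instead of one per row.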

### Troubleshooting

**Common Issues**:

1. **Rankings not updating**:
   - Check that the cron job is running
   - Verify database connectivity
   - Check for lock files preventing concurrent runs

2. **Incorrect rankings**:
   - Clear the comparison cache and recalculate
   - Verify rating data integrity
   - Check for duplicate user ratings

3. **Performance issues**:
   - Analyze slow queries with Django Debug Toolbar
   - Consider increasing database resources
   - Implement incremental updates for large datasets

### API Rate Limiting

Configure in `settings.py`:

```python
REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle'
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '100/hour',
        'user': '1000/hour'
    }
}
```

## Architecture Decisions

### Why Pairwise Comparison?

- **Fairness**: Only compares rides among users who've experienced both
- **Reduces Bias**: Popular rides aren't advantaged over less-ridden ones
- **Head-to-Head Logic**: Direct comparisons matter for tie-breaking
- **Robust to Outliers**: One extreme rating doesn't skew results

### Caching Strategy

- **Comparison Cache**: Store pairwise results to avoid recalculation
- **Snapshot History**: Keep 365 days of historical data
- **API Response Cache**: Cache ranking lists for 1 hour

### Scalability Considerations

- **O(n²) Complexity**: Scales quadratically with the number of rides
- **Batch Processing**: Process in chunks to manage memory
- **Incremental Updates**: Future enhancement for real-time updates

## Testing

### Unit Tests

```python
# apps/rides/tests/test_ranking_service.py
class RankingServiceTestCase(TestCase):
    def test_pairwise_comparison(self):
        """Test comparison logic between two rides"""

    def test_ranking_calculation(self):
        """Test overall ranking calculation"""

    def test_tiebreaker_logic(self):
        """Test head-to-head tiebreaker"""
```

### Integration Tests

```python
# apps/api/tests/test_ranking_api.py
class RankingAPITestCase(APITestCase):
    def test_get_rankings_list(self):
        """Test ranking list endpoint"""

    def test_filtering_and_ordering(self):
        """Test query parameters"""

    def test_calculation_trigger(self):
        """Test admin calculation endpoint"""
```

### Frontend Tests

```typescript
// frontend/tests/api/rankings.test.ts
describe('Rankings API', () => {
  it('should fetch top rankings', async () => {
    const rankings = await rankingsApi.getTopRankings(10)
    expect(rankings).toHaveLength(10)
    expect(rankings[0].rank).toBe(1)
  })

  it('should filter by category', async () => {
    const response = await rankingsApi.getRankings({ category: 'RC' })
    expect(response.results.every(r => r.ride.category === 'RC')).toBe(true)
  })
})
```

## Future Enhancements

### Planned Features

1. **Real-time Updates**: Update rankings immediately after new reviews
2. **Regional Rankings**: Rankings by geographic region
3. **Time-Period Rankings**: Best new rides, best classic rides
4. **User Preferences**: Personalized rankings based on user history
5. **Confidence Intervals**: Statistical confidence for rankings
6. **Mobile App API**: Optimized endpoints for mobile applications

### Potential Optimizations

1. **Incremental Updates**: Only recalculate affected comparisons
2. **Parallel Processing**: Distribute calculation across workers
3. **Machine Learning**: Predict rankings for new rides
4. **GraphQL API**: More flexible data fetching
5. **WebSocket Updates**: Real-time ranking changes

## Support & Documentation

### Additional Resources

- [Original IRCP Algorithm](https://ushsho.com/ridesurvey.py)
- [Django REST Framework Documentation](https://www.django-rest-framework.org/)
- [Vue.js Documentation](https://vuejs.org/)
- [TypeScript Documentation](https://www.typescriptlang.org/)

### Contact

For questions or issues related to the ranking system:

- Create an issue in the project repository
- Contact the development team
- Check the troubleshooting section above

---

*Last Updated: January 2025*
*Version: 1.0*

docs/ride-ranking-system.md (new file, 608 lines)

@@ -0,0 +1,608 @@

# Ride Ranking System Documentation

## Overview

The ThrillWiki ride ranking system implements the **Internet Roller Coaster Poll (IRCP) algorithm** to calculate fair, accurate rankings for all rides based on user ratings. The system provides daily-updated rankings that reflect the collective preferences of the community.

## Algorithm Description

### Original Internet Roller Coaster Poll Algorithm

The system implements the exact methodology from the Internet Roller Coaster Poll:

> "Each coaster is compared one at a time to every other coaster to see whether more people who have ridden both of them preferred one or the other. A coaster is given a 'Win' for each coaster that more mutual riders ranked behind it, given a 'Loss' for each coaster that more mutual riders ranked ahead of it, and given a 'Tie' for each coaster that the same number of mutual riders ranked above it and below it. Coasters are ranked by their overall winning percentage (where ties count as half of a win and half of a loss). In the event that two coasters end up with identical winning percentages, the tie is broken (if possible) by determining which of the two won the mutual rider comparison between those two coasters."

### Our Implementation

The ThrillWiki implementation adapts this algorithm to work with our existing rating system:

1. **"Have You Ridden" Detection**: A user is considered to have ridden a ride if they have submitted a rating/review for it
2. **Preference Determination**: Higher ratings indicate preference (e.g., a user rating Ride A as 8/10 and Ride B as 6/10 prefers Ride A)
3. **Pairwise Comparisons**: Every ride is compared to every other ride based on mutual riders
4. **Winning Percentage**: Calculated as `(wins + 0.5 * ties) / total_comparisons`
5. **Tie Breaking**: Head-to-head comparisons resolve ties in winning percentage
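
The preference and winning-percentage rules above can be sketched in plain Python. This is a minimal illustration, not the production `RideRankingService`; the dict-of-ratings input format is an assumption made for the example:

```python
def compare_pair(ratings_a, ratings_b):
    """Compare two rides using only users who rated both (mutual riders).

    Each dict maps user id -> rating; returns (a_wins, b_wins, ties)
    counted over the mutual riders.
    """
    a_wins = b_wins = ties = 0
    for user in ratings_a.keys() & ratings_b.keys():
        if ratings_a[user] > ratings_b[user]:
            a_wins += 1
        elif ratings_a[user] < ratings_b[user]:
            b_wins += 1
        else:
            ties += 1
    return a_wins, b_wins, ties


def winning_percentage(wins, losses, ties):
    """Ties count as half a win and half a loss."""
    total = wins + losses + ties
    return (wins + 0.5 * ties) / total if total else 0.0
```

For example, `compare_pair({'u1': 10, 'u2': 9}, {'u1': 9, 'u2': 10})` yields `(1, 1, 0)`: one mutual rider preferred each ride.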

## System Architecture

### Database Models

#### RideRanking

Stores the calculated ranking for each ride:

- `rank`: Overall rank position (1 = best)
- `wins`: Number of rides this ride beats in pairwise comparisons
- `losses`: Number of rides that beat this ride
- `ties`: Number of rides with equal preference
- `winning_percentage`: Win percentage where ties count as 0.5
- `mutual_riders_count`: Total users who have rated this ride
- `average_rating`: Average rating from all users
- `last_calculated`: Timestamp of the last calculation

#### RidePairComparison

Caches pairwise comparison results between two rides:

- `ride_a`, `ride_b`: The two rides being compared
- `ride_a_wins`: Number of mutual riders who rated `ride_a` higher
- `ride_b_wins`: Number of mutual riders who rated `ride_b` higher
- `ties`: Number of mutual riders who rated both equally
- `mutual_riders_count`: Total users who rated both rides

#### RankingSnapshot

Historical tracking of rankings:

- `ride`: The ride being tracked
- `rank`: Rank on the snapshot date
- `winning_percentage`: Win percentage on the snapshot date
- `snapshot_date`: Date of the snapshot

### Service Layer

The `RideRankingService` (`apps/rides/services/ranking_service.py`) implements the core algorithm:

```python
service = RideRankingService()
result = service.update_all_rankings(category='RC')  # Optional category filter
```

Key methods:

- `update_all_rankings()`: Main entry point for ranking calculation
- `_calculate_pairwise_comparison()`: Compares two rides
- `_calculate_rankings_from_comparisons()`: Converts comparisons to rankings
- `_apply_tiebreakers()`: Resolves ties using head-to-head comparisons
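
The head-to-head tiebreaker can be illustrated with a small sorting sketch. This is a hypothetical helper, not the actual `_apply_tiebreakers()` implementation, and it only resolves two-way ties between adjacently ranked rides, as the quoted IRCP description does:

```python
def rank_with_tiebreaker(percentages, h2h_winner):
    """Order rides by winning percentage, breaking exact ties using
    the head-to-head comparison between the two tied rides.

    percentages: {ride: winning_percentage}
    h2h_winner: callable (ride_x, ride_y) -> winning ride, or None for a tie
    """
    ordered = sorted(percentages, key=lambda r: percentages[r], reverse=True)
    for i in range(len(ordered) - 1):
        a, b = ordered[i], ordered[i + 1]
        # If two adjacent rides are exactly tied and b beat a head-to-head,
        # swap them so the head-to-head winner ranks higher.
        if percentages[a] == percentages[b] and h2h_winner(a, b) == b:
            ordered[i], ordered[i + 1] = b, a
    return ordered
```

With `percentages = {'A': 0.9, 'B': 0.8, 'C': 0.8}` and a head-to-head function that says C beat B, this returns `['A', 'C', 'B']`.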

## Usage

### Manual Ranking Update

Run the management command to update rankings:

```bash
# Update all ride rankings
uv run python manage.py update_ride_rankings

# Update only roller coaster rankings
uv run python manage.py update_ride_rankings --category RC
```

### Scheduled Updates (Cron)

Add to crontab for daily updates at 2 AM:

```bash
0 2 * * * cd /Users/talor/thrillwiki_django_no_react/backend && uv run python manage.py update_ride_rankings
```

### Accessing Rankings

#### Django Admin

View rankings at `/admin/rides/rideranking/`.

#### In Code

```python
from apps.rides.models import RideRanking

# Get top 10 rides
top_rides = RideRanking.objects.select_related('ride').order_by('rank')[:10]

# Get ranking for a specific ride
ranking = RideRanking.objects.get(ride__slug='millennium-force')
print(f"Rank: #{ranking.rank}, Win%: {ranking.winning_percentage:.1%}")
```

## Web Interface

### Views

The ranking system provides comprehensive web views for browsing and analyzing rankings.

#### Rankings List View

**URL**: `/rides/rankings/`

**Features**:

- Paginated list of all ranked rides (50 per page)
- Filter by ride category (Roller Coasters, Dark Rides, etc.)
- Filter by minimum mutual riders to ensure statistical significance
- HTMX support for dynamic updates without a page refresh
- Shows key metrics: rank, ride name, park, wins/losses/ties, winning percentage

**Query Parameters**:

- `category`: Filter by ride type (RC, DR, FR, WR, TR, OT)
- `min_riders`: Minimum number of mutual riders required (e.g., 100)
- `page`: Page number for pagination

**Example URLs**:

```
/rides/rankings/                            # All rankings
/rides/rankings/?category=RC                # Only roller coasters
/rides/rankings/?category=RC&min_riders=50  # Roller coasters with 50+ mutual riders
```

#### Ranking Detail View

**URL**: `/rides/rankings/<ride-slug>/`

**Features**:

- Comprehensive ranking metrics for a specific ride
- Head-to-head comparison results with other rides
- Historical ranking chart showing trends over time
- Rank movement indicators (up/down from the previous calculation)
- Detailed breakdown of wins, losses, and ties

**HTMX Endpoints**:

- `/rides/rankings/<ride-slug>/history-chart/` - Returns ranking history chart data
- `/rides/rankings/<ride-slug>/comparisons/` - Returns the detailed comparison table

### Template Integration

The views use Django templates with HTMX for dynamic updates:

```django
{# rankings.html - Main rankings page #}
{% extends "base.html" %}
{% block content %}
<div class="rankings-container">
  <h1>Ride Rankings</h1>

  {# Filters #}
  <form hx-get="/rides/rankings/" hx-target="#rankings-table">
    <select name="category">
      <option value="all">All Rides</option>
      <option value="RC">Roller Coasters</option>
      <option value="DR">Dark Rides</option>
      <!-- etc. -->
    </select>
    <input type="number" name="min_riders" placeholder="Min riders">
    <button type="submit">Filter</button>
  </form>

  {# Rankings table #}
  <div id="rankings-table">
    {% include "rides/partials/rankings_table.html" %}
  </div>
</div>
{% endblock %}
```

## API Endpoints

### RESTful API

The ranking system exposes a comprehensive REST API for programmatic access.

#### List Rankings

**Endpoint**: `GET /api/v1/rankings/`

**Description**: Get a paginated list of ride rankings

**Query Parameters**:

- `category` (string): Filter by ride category (RC, DR, FR, WR, TR, OT)
- `min_riders` (integer): Minimum mutual riders required
- `park` (string): Filter by park slug
- `ordering` (string): Sort order (`rank`, `-rank`, `winning_percentage`, `-winning_percentage`)
- `page` (integer): Page number
- `page_size` (integer): Results per page (default: 20)

**Response Example**:

```json
{
  "count": 523,
  "next": "http://api.example.com/api/v1/rankings/?page=2",
  "previous": null,
  "results": [
    {
      "id": 1,
      "rank": 1,
      "ride": {
        "id": 123,
        "name": "Steel Vengeance",
        "slug": "steel-vengeance",
        "park": {
          "id": 45,
          "name": "Cedar Point",
          "slug": "cedar-point"
        },
        "category": "RC"
      },
      "wins": 487,
      "losses": 23,
      "ties": 13,
      "winning_percentage": 0.9405,
      "mutual_riders_count": 1543,
      "comparison_count": 523,
      "average_rating": 9.4,
      "last_calculated": "2024-01-15T02:00:00Z",
      "rank_change": 0,
      "previous_rank": 1
    }
  ]
}
```

#### Get Ranking Details

**Endpoint**: `GET /api/v1/rankings/<ride-slug>/`

**Description**: Get detailed ranking information for a specific ride

**Response**: Includes full ride details, head-to-head comparisons, and ranking history

```json
{
  "id": 1,
  "rank": 1,
  "ride": {
    "id": 123,
    "name": "Steel Vengeance",
    "slug": "steel-vengeance",
    "description": "Hybrid roller coaster...",
    "park": {
      "id": 45,
      "name": "Cedar Point",
      "slug": "cedar-point",
      "location": {
        "city": "Sandusky",
        "state": "Ohio",
        "country": "United States"
      }
    },
    "category": "RC",
    "manufacturer": {
      "id": 12,
      "name": "Rocky Mountain Construction"
    },
    "opening_date": "2018-05-05",
    "status": "OPERATING"
  },
  "wins": 487,
  "losses": 23,
  "ties": 13,
  "winning_percentage": 0.9405,
  "mutual_riders_count": 1543,
  "comparison_count": 523,
  "average_rating": 9.4,
  "last_calculated": "2024-01-15T02:00:00Z",
  "calculation_version": "1.0",
  "head_to_head_comparisons": [
    {
      "opponent": {
        "id": 124,
        "name": "Fury 325",
        "slug": "fury-325",
        "park": "Carowinds"
      },
      "wins": 234,
      "losses": 189,
      "ties": 23,
      "result": "win",
      "mutual_riders": 446
    }
  ],
  "ranking_history": [
    {
      "date": "2024-01-15",
      "rank": 1,
      "winning_percentage": 0.9405
    },
    {
      "date": "2024-01-14",
      "rank": 1,
      "winning_percentage": 0.9398
    }
  ]
}
```

#### Get Ranking History

**Endpoint**: `GET /api/v1/rankings/<ride-slug>/history/`

**Description**: Get historical ranking data for a specific ride (last 90 days)

**Response**: Array of ranking snapshots

#### Get Head-to-Head Comparisons

**Endpoint**: `GET /api/v1/rankings/<ride-slug>/comparisons/`

**Description**: Get detailed head-to-head comparison data for a ride

**Response**: Array of comparison results with all other rides

#### Get Ranking Statistics

**Endpoint**: `GET /api/v1/rankings/statistics/`

**Description**: Get system-wide ranking statistics

**Response Example**:

```json
{
  "total_ranked_rides": 523,
  "total_comparisons": 136503,
  "last_calculation_time": "2024-01-15T02:00:00Z",
  "calculation_duration": 45.23,
  "top_rated_ride": {
    "id": 123,
    "name": "Steel Vengeance",
    "slug": "steel-vengeance",
    "park": "Cedar Point",
    "rank": 1,
    "winning_percentage": 0.9405,
    "average_rating": 9.4
  },
  "most_compared_ride": {
    "id": 456,
    "name": "Millennium Force",
    "slug": "millennium-force",
    "park": "Cedar Point",
    "comparison_count": 521
  },
  "biggest_rank_change": {
    "ride": {
      "id": 789,
      "name": "Iron Gwazi",
      "slug": "iron-gwazi"
    },
    "current_rank": 3,
    "previous_rank": 8,
    "change": 5
  }
}
```

#### Trigger Ranking Calculation (Admin)

**Endpoint**: `POST /api/v1/rankings/calculate/`

**Description**: Manually trigger ranking calculation (requires admin authentication)

**Request Body**:

```json
{
  "category": "RC" // Optional - filter to a specific category
}
```

**Response**:

```json
{
  "status": "success",
  "rides_ranked": 523,
  "comparisons_made": 136503,
  "duration": 45.23,
  "timestamp": "2024-01-15T02:00:45Z"
}
```

### API Authentication

- Read endpoints (GET): No authentication required
- Calculation endpoint (POST): Requires admin authentication via token or session

### API Rate Limiting

- Anonymous users: 100 requests per hour
- Authenticated users: 1,000 requests per hour
- Admin users: No rate limiting

### Python Client Example

```python
import requests

# Get top 10 rankings
response = requests.get(
    "https://api.thrillwiki.com/api/v1/rankings/",
    params={"page_size": 10},
)
rankings = response.json()["results"]

for ranking in rankings:
    print(f"#{ranking['rank']}: {ranking['ride']['name']} - {ranking['winning_percentage']:.1%}")

# Get a specific ride's ranking
response = requests.get(
    "https://api.thrillwiki.com/api/v1/rankings/steel-vengeance/"
)
details = response.json()
print(f"{details['ride']['name']} is ranked #{details['rank']}")
print(f"Wins: {details['wins']}, Losses: {details['losses']}, Ties: {details['ties']}")
```

### JavaScript/TypeScript Client Example

```typescript
// Using the fetch API
async function getTopRankings(limit: number = 10) {
  const response = await fetch(
    `https://api.thrillwiki.com/api/v1/rankings/?page_size=${limit}`
  );
  const data = await response.json();
  return data.results;
}

// Using axios
import axios from 'axios';

const api = axios.create({
  baseURL: 'https://api.thrillwiki.com/api/v1',
});

async function getRideRanking(slug: string) {
  const { data } = await api.get(`/rankings/${slug}/`);
  return data;
}
```

## Calculation Example

### Scenario

Three rides with the following ratings from users:

**Ride A** (Millennium Force):

- User 1: 10/10
- User 2: 9/10
- User 3: 8/10

**Ride B** (Maverick):

- User 1: 9/10
- User 2: 10/10
- User 4: 8/10

**Ride C** (Top Thrill 2):

- User 1: 8/10
- User 3: 9/10
- User 4: 10/10

### Pairwise Comparisons

**A vs B** (mutual riders: Users 1, 2):

- User 1: A(10) > B(9) → A wins
- User 2: A(9) < B(10) → B wins
- Result: one vote each, so this matchup counts as a Tie for both rides

**A vs C** (mutual riders: Users 1, 3):

- User 1: A(10) > C(8) → A wins
- User 3: A(8) < C(9) → C wins
- Result: one vote each, so this matchup counts as a Tie for both rides

**B vs C** (mutual riders: Users 1, 4):

- User 1: B(9) > C(8) → B wins
- User 4: B(8) < C(10) → C wins
- Result: one vote each, so this matchup counts as a Tie for both rides

### Final Rankings

Because every matchup splits its mutual riders evenly, each ride records:

- Wins: 0
- Losses: 0
- Ties: 2
- Winning Percentage: `(0 + 0.5 * 2) / 2 = 50%`

Since all three rides have identical winning percentages, and every head-to-head matchup is itself a tie, the system would fall back to additional criteria (mutual rider count, average rating) to determine the final order.
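
This worked example can be reproduced directly. The sketch below uses hypothetical per-user rating dicts mirroring the scenario above:

```python
ratings = {
    'A': {'u1': 10, 'u2': 9, 'u3': 8},   # Millennium Force
    'B': {'u1': 9, 'u2': 10, 'u4': 8},   # Maverick
    'C': {'u1': 8, 'u3': 9, 'u4': 10},   # Top Thrill 2
}


def head_to_head(x, y):
    """Count mutual riders preferring x, preferring y, and rating both equally."""
    mutual = ratings[x].keys() & ratings[y].keys()
    x_pref = sum(ratings[x][u] > ratings[y][u] for u in mutual)
    y_pref = sum(ratings[y][u] > ratings[x][u] for u in mutual)
    return x_pref, y_pref, len(mutual) - x_pref - y_pref


records = {}
for ride in ratings:
    wins = losses = ties = 0
    for other in ratings:
        if other == ride:
            continue
        x, y, _ = head_to_head(ride, other)
        if x > y:
            wins += 1
        elif y > x:
            losses += 1
        else:
            ties += 1
    records[ride] = (wins, losses, ties)

# Every matchup splits 1-1 among mutual riders, so each ride records
# two Ties and a (0 + 0.5 * 2) / 2 = 50% winning percentage.
```

Each ride ends 0-0-2, confirming the 50% winning percentage derived above.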

## Performance Considerations

### Optimization Strategies

1. **Caching**: Pairwise comparisons are cached in the `RidePairComparison` table
2. **Batch Processing**: Comparisons are calculated in bulk to minimize database queries
3. **Indexes**: Database indexes on frequently queried fields:
   - `rank` for ordering
   - `winning_percentage` for sorting
   - `ride_id` for lookups
   - Composite indexes for complex queries

### Scalability

The algorithm complexity is O(n²), where n is the number of rides, because each ride must be compared to every other ride. For 1,000 rides:

- Comparisons needed: 499,500
- Estimated processing time: 30-60 seconds
- Database storage: ~20 MB for the comparison cache
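
The comparison count follows from the number of unordered ride pairs, n(n-1)/2, which a one-line helper makes concrete:

```python
def comparisons_needed(n):
    """Each unordered pair of rides is compared once: n choose 2."""
    return n * (n - 1) // 2
```

For 1,000 rides this gives the 499,500 figure above, and for the 523 rides used in the API examples it gives 136,503, matching the `comparisons_made` values shown earlier.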

## Monitoring and Maintenance

### Admin Interface

Access the Django admin to:

- View current rankings
- Inspect pairwise comparisons
- Track historical changes
- Monitor calculation performance

### Logging

The system logs:

- Calculation start/end times
- Number of rides ranked
- Number of comparisons made
- Any errors or warnings

Example log output:

```
INFO: Starting ranking calculation for category: RC
INFO: Found 523 rides to rank
DEBUG: Processed 100/136503 comparisons
INFO: Ranking calculation completed in 45.23 seconds
```

### Data Cleanup

Old ranking snapshots are automatically cleaned up after 365 days to manage database size.
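
The retention rule can be sketched in plain Python (the real cleanup presumably runs as a queryset delete; `expired_snapshots` is a hypothetical helper for illustration):

```python
from datetime import date, timedelta


def expired_snapshots(snapshot_dates, today=None, retention_days=365):
    """Return snapshot dates older than the retention window."""
    today = today or date.today()
    cutoff = today - timedelta(days=retention_days)
    return [d for d in snapshot_dates if d < cutoff]
```

Anything returned by this helper would be deleted; snapshots within the last 365 days are kept.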
|
||||
|
||||
## Algorithm Validity
|
||||
|
||||
### Mathematical Properties
|
||||
|
||||
1. **Transitivity**: Not enforced (A > B and B > C doesn't guarantee A > C)
|
||||
2. **Consistency**: Same input always produces same output
|
||||
3. **Fairness**: Every mutual rider's opinion counts equally
|
||||
4. **Completeness**: All possible comparisons are considered
|
||||
|
||||
### Advantages Over Simple Averaging
|
||||
|
||||
1. **Reduces Selection Bias**: Only compares rides among users who've experienced both
|
||||
2. **Fair Comparisons**: Popular rides aren't advantaged over less-ridden ones
|
||||
3. **Head-to-Head Logic**: Direct comparisons matter for tie-breaking
|
||||
4. **Robust to Outliers**: One extremely high/low rating doesn't skew results
## Future Enhancements

### Planned Features

1. **Real-time Updates**: Update rankings immediately after new reviews
2. **Category-Specific Rankings**: Separate rankings for different ride types
3. **Regional Rankings**: Rankings by geographic region
4. **Time-Period Rankings**: Best new rides, best classic rides
5. **API Endpoints**: RESTful API for ranking data
6. **Ranking Trends**: Visualizations of ranking changes over time

### Potential Optimizations

1. **Incremental Updates**: Only recalculate affected comparisons
2. **Parallel Processing**: Distribute comparison calculations
3. **Machine Learning**: Predict rankings for new rides
4. **Weighted Ratings**: Consider recency or reviewer credibility

## Technical Details

### File Structure

```
apps/rides/
├── models/
│   └── rankings.py                    # Ranking data models
├── services/
│   └── ranking_service.py             # Algorithm implementation
├── management/
│   └── commands/
│       └── update_ride_rankings.py    # CLI command
└── admin.py                           # Admin interface configuration
```

### Database Tables

- `rides_rideranking`: Current rankings
- `rides_ridepaircomparison`: Cached comparisons
- `rides_rankingsnapshot`: Historical data
- `rides_riderankingevent`: Audit trail (pghistory)
- `rides_ridepaircomparisonevent`: Audit trail (pghistory)

### Dependencies

- Django 5.2+
- PostgreSQL with pghistory
- Python 3.11+

## References

- [Internet Roller Coaster Poll](https://ushsho.com/ridesurvey.py) - Original algorithm source
- [Condorcet Method](https://en.wikipedia.org/wiki/Condorcet_method) - Similar voting system theory
- [Pairwise Comparison](https://en.wikipedia.org/wiki/Pairwise_comparison) - Mathematical foundation
---

**New file:** `docs/system-architecture-diagram.md` (164 lines)
# ThrillWiki Trending System - Technical Architecture

## System Components Overview

```mermaid
graph TB
    subgraph "Frontend Layer"
        A[Home.vue Component] --> B[API Service Layer]
        B --> C[Trending Content Display]
        B --> D[New Content Display]
    end

    subgraph "API Layer"
        E[/api/v1/trending/] --> F[TrendingViewSet]
        G[/api/v1/new-content/] --> H[NewContentViewSet]
        F --> I[TrendingSerializer]
        H --> J[NewContentSerializer]
    end

    subgraph "Business Logic"
        K[Trending Algorithm] --> L[Score Calculator]
        L --> M[Weight Processor]
        M --> N[Ranking Engine]
    end

    subgraph "Data Layer"
        O[PageView Model] --> P[View Tracker]
        Q[Park Model] --> R[Content Source]
        S[Ride Model] --> R
        T[pghistory Events] --> U[Change Tracker]
    end

    subgraph "Caching Layer"
        V[Redis Cache] --> W[Trending Cache]
        V --> X[New Content Cache]
        W --> Y[6hr TTL]
        X --> Z[24hr TTL]
    end

    subgraph "Background Tasks"
        AA[Management Command] --> BB[Calculate Trending]
        CC[Celery/Cron Scheduler] --> AA
        BB --> K
        BB --> V
    end

    subgraph "Middleware"
        DD[View Tracking Middleware] --> O
        EE[User Request] --> DD
    end

    A --> E
    A --> G
    F --> K
    H --> U
    K --> O
    K --> Q
    K --> S
    F --> V
    H --> V
    EE --> A
```

## Data Flow Architecture

```mermaid
sequenceDiagram
    participant U as User
    participant F as Frontend
    participant API as API Layer
    participant C as Cache
    participant BG as Background Job
    participant DB as Database
    participant M as Middleware

    Note over U,M: Page View Tracking
    U->>F: Visit Park/Ride Page
    F->>M: HTTP Request
    M->>DB: Store PageView Record

    Note over U,M: Trending Content Request
    U->>F: Load Home Page
    F->>API: GET /api/v1/trending/?tab=rides
    API->>C: Check Cache
    alt Cache Hit
        C->>API: Return Cached Data
    else Cache Miss
        API->>DB: Query Trending Data
        DB->>API: Raw Data
        API->>API: Apply Algorithm
        API->>C: Store in Cache
    end
    API->>F: Trending Response
    F->>U: Display Trending Content

    Note over U,M: Background Processing
    BG->>DB: Aggregate PageViews
    BG->>DB: Calculate Scores
    BG->>C: Update Cache
```

## Algorithm Flow

```mermaid
flowchart TD
    A[Start Trending Calculation] --> B[Fetch Recent PageViews]
    B --> C[Group by Content Type/ID]
    C --> D[Calculate View Score]
    D --> E[Fetch Content Ratings]
    E --> F[Calculate Rating Score]
    F --> G[Calculate Recency Score]
    G --> H[Apply Weighted Formula]
    H --> I{Score > Threshold?}
    I -->|Yes| J[Add to Trending List]
    I -->|No| K[Skip Item]
    J --> L[Sort by Final Score]
    K --> L
    L --> M[Cache Results]
    M --> N[End]
```

## Database Schema Relationships

```mermaid
erDiagram
    PageView ||--o{ ContentType : references
    PageView {
        id bigint PK
        content_type_id int FK
        object_id int
        user_session varchar
        ip_address inet
        user_agent text
        timestamp datetime
    }

    Park ||--o{ PageView : tracked_in
    Park {
        id int PK
        name varchar
        slug varchar
        average_rating decimal
        status varchar
        opening_date date
        closing_date date
    }

    Ride ||--o{ PageView : tracked_in
    Ride {
        id int PK
        name varchar
        slug varchar
        park_id int FK
        average_rating decimal
        category varchar
        status varchar
        opening_date date
    }

    TrendingCache {
        key varchar PK
        data json
        expires_at datetime
    }
```
---

**New file:** `docs/trending-system-architecture.md` (140 lines)
# ThrillWiki Trending & New Content System Architecture

## System Overview

This document outlines the architecture for implementing real trending and new content functionality to replace the current mock data implementation on the ThrillWiki home page.

## Current State Analysis

### Frontend Structure (Vue 3 + TypeScript)

- **Home.vue** expects specific data formats:
  - **Trending Content**: `{id, name, location, category, rating, rank, views, views_change, slug}`
  - **New Content**: `{id, name, location, category, date_added, slug}`
- **Tabs Supported**:
  - Trending: Rides, Parks, Reviews
  - New: Recently Added, Newly Opened, Upcoming

### Backend Infrastructure

- **Django REST Framework** with comprehensive ViewSets
- **pghistory** already tracking model changes
- **Existing endpoints** for recent changes, openings, closures
- **Models**: Park and Ride with ratings, status, dates

## Proposed Architecture

### 1. Data Flow Architecture

```mermaid
flowchart TD
    A[User Views Page] --> B[View Tracking Middleware]
    B --> C[PageView Model]

    D[Trending Calculation Job] --> E[Trending Algorithm]
    E --> F[Cache Layer]

    G[Frontend Request] --> H[API Endpoints]
    H --> F
    F --> I[Serialized Response]
    I --> J[Frontend Display]

    K[Management Command] --> D
    L[Celery/Cron Schedule] --> K
```

### 2. Database Schema Design

#### PageView Model

```python
class PageView(models.Model):
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    content_object = GenericForeignKey('content_type', 'object_id')

    user_session = models.CharField(max_length=40)
    ip_address = models.GenericIPAddressField()
    user_agent = models.TextField()
    timestamp = models.DateTimeField(auto_now_add=True)

    class Meta:
        indexes = [
            models.Index(fields=['content_type', 'object_id', 'timestamp']),
            models.Index(fields=['timestamp']),
        ]
```
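With those indexes in place, the trending job's first step is aggregating raw views per object inside the time window. A plain-Python sketch of that aggregation follows; in Django this would be a `.values(...).annotate(Count('id'))` query over the indexed columns, and the record shape here is an assumption:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def recent_view_counts(page_views, window_hours=24, now=None):
    """Aggregate raw view records into per-object counts.

    Each record is a (content_type, object_id, timestamp) tuple; only
    records inside the trending window are counted. The real system
    would run this as a single GROUP BY query against PageView.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=window_hours)
    return Counter(
        (ctype, obj_id)
        for ctype, obj_id, ts in page_views
        if ts >= cutoff
    )
```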
### 3. Trending Algorithm

#### Calculation Components

- **View Count Weight**: Recent page views (configurable time window)
- **Rating Weight**: Average rating from Park/Ride models
- **Recency Boost**: Recently added/updated content bonus
- **Category Balancing**: Ensure diverse content across categories

#### Formula

```
Trending Score = (View Score × 0.4) + (Rating Score × 0.3) + (Recency Score × 0.2) + (Engagement Score × 0.1)
```
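The formula translates directly into a weighted sum. The weights below come from the document; the assumption that each component score is pre-normalised to [0, 1] is mine:

```python
# Weights from the documented trending formula.
WEIGHTS = {"views": 0.4, "rating": 0.3, "recency": 0.2, "engagement": 0.1}

def trending_score(components):
    """Combine normalised component scores into the final trending score.

    `components` maps factor name -> score in [0, 1]; missing factors
    contribute zero, so a brand-new item with no engagement data still
    gets a score.
    """
    return sum(WEIGHTS[name] * components.get(name, 0.0) for name in WEIGHTS)
```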
### 4. API Endpoints Design

#### Trending Endpoint

```
GET /api/v1/trending/?tab={rides|parks|reviews}&limit=6
```

#### New Content Endpoint

```
GET /api/v1/new-content/?tab={recently-added|newly-opened|upcoming}&limit=4
```
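A small helper sketch for building these request URLs; the base path and tab values come from the spec above, while the helper names themselves are illustrative:

```python
from urllib.parse import urlencode

BASE = "/api/v1"

def trending_url(tab="rides", limit=6):
    """Build a trending request URL; valid tabs per the spec are
    rides, parks, and reviews."""
    assert tab in {"rides", "parks", "reviews"}
    return f"{BASE}/trending/?{urlencode({'tab': tab, 'limit': limit})}"

def new_content_url(tab="recently-added", limit=4):
    """Build a new-content request URL for the documented tab values."""
    assert tab in {"recently-added", "newly-opened", "upcoming"}
    return f"{BASE}/new-content/?{urlencode({'tab': tab, 'limit': limit})}"
```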
### 5. Caching Strategy

#### Cache Keys

- `trending_rides_6h`: Trending rides cache (6 hour TTL)
- `trending_parks_6h`: Trending parks cache (6 hour TTL)
- `new_content_24h`: New content cache (24 hour TTL)
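The key-to-TTL mapping can be sketched with a minimal in-memory stand-in for the Redis layer. In the real system Django's `cache.set(key, data, timeout)` plays this role; the class below is illustrative only:

```python
import time

# TTLs mirror the documented cache keys.
TTLS = {
    "trending_rides_6h": 6 * 3600,
    "trending_parks_6h": 6 * 3600,
    "new_content_24h": 24 * 3600,
}

class TTLCache:
    """Minimal per-key-TTL cache: entries expire after the TTL tied
    to their key, after which get() reports a cache miss."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock  # injectable for testing
        self._store = {}

    def set(self, key, data):
        self._store[key] = (data, self._clock() + TTLS[key])

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        data, expires_at = entry
        if self._clock() >= expires_at:  # expired -> treat as miss
            del self._store[key]
            return None
        return data
```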
#### Cache Invalidation

- Manual refresh via management command
- Automatic refresh on schedule
- Cache warming during low-traffic periods

### 6. Performance Considerations

#### View Tracking Optimization

- Async middleware for non-blocking view tracking
- Batch insert for high-volume periods
- IP-based rate limiting to prevent spam
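Batched inserts can be sketched as a small buffer that flushes in one bulk write once full. The `flush` callback would wrap `PageView.objects.bulk_create` in the real middleware; the class name and batch size are assumptions:

```python
class ViewBuffer:
    """Buffer view records in memory and flush them in batches.

    record() appends a view and triggers a flush when the batch fills;
    drain() forces out any remainder (e.g. on shutdown or a timer).
    """

    def __init__(self, flush, batch_size=100):
        self._flush = flush            # e.g. wraps bulk_create
        self._batch_size = batch_size
        self._pending = []

    def record(self, view):
        self._pending.append(view)
        if len(self._pending) >= self._batch_size:
            self.drain()

    def drain(self):
        if self._pending:
            self._flush(list(self._pending))
            self._pending.clear()
```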
#### Database Optimization

- Proper indexing on PageView model
- Aggregate tables for trending calculations
- Periodic cleanup of old PageView records

## Implementation Plan

The implementation follows the todo list with these key phases:

1. **Database Layer**: PageView model and migrations
2. **Algorithm Design**: Trending calculation logic
3. **API Layer**: New endpoints and serializers
4. **Tracking System**: Middleware for view capture
5. **Caching Layer**: Performance optimization
6. **Automation**: Management commands and scheduling
7. **Frontend Integration**: Replace mock data
8. **Testing & Monitoring**: Comprehensive coverage

## Security & Privacy

- Anonymous view tracking (no personal data)
- Session-based rate limiting
- User agent validation
- IP address anonymization options

## Monitoring & Analytics

- View tracking success rates
- Trending calculation performance
- Cache hit/miss ratios
- API response times
- Algorithm effectiveness metrics
---

**New file:** `frontend/src/components/entity/AuthPrompt.vue` (175 lines)
```vue
<template>
  <div class="space-y-4">
    <div class="text-center">
      <div class="mb-4">
        <div
          class="mx-auto w-12 h-12 bg-blue-100 dark:bg-blue-900 rounded-full flex items-center justify-center"
        >
          <svg
            class="h-6 w-6 text-blue-600 dark:text-blue-400"
            fill="none"
            viewBox="0 0 24 24"
            stroke="currentColor"
          >
            <path
              stroke-linecap="round"
              stroke-linejoin="round"
              stroke-width="2"
              d="M12 15v2m-6 4h12a2 2 0 002-2v-6a2 2 0 00-2-2H6a2 2 0 00-2 2v6a2 2 0 002 2zm10-10V7a4 4 0 00-8 0v4h8z"
            />
          </svg>
        </div>
      </div>

      <h4 class="text-lg font-semibold text-gray-900 dark:text-white mb-2">
        Sign in to contribute
      </h4>

      <p class="text-gray-600 dark:text-gray-400 mb-6">
        You need to be signed in to add "{{ searchTerm }}" to ThrillWiki's database. Join
        our community of theme park enthusiasts!
      </p>
    </div>

    <!-- Benefits List -->
    <div class="bg-gray-50 dark:bg-gray-700 rounded-lg p-4 space-y-3">
      <h5 class="font-medium text-gray-900 dark:text-white flex items-center gap-2">
        <svg
          class="h-5 w-5 text-green-600 dark:text-green-400"
          fill="none"
          viewBox="0 0 24 24"
          stroke="currentColor"
        >
          <path
            stroke-linecap="round"
            stroke-linejoin="round"
            stroke-width="2"
            d="M5 13l4 4L19 7"
          />
        </svg>
        What you can do:
      </h5>
      <ul class="space-y-2 text-sm text-gray-600 dark:text-gray-400">
        <li class="flex items-start gap-2">
          <svg
            class="h-4 w-4 text-blue-500 mt-0.5 flex-shrink-0"
            fill="none"
            viewBox="0 0 24 24"
            stroke="currentColor"
          >
            <path
              stroke-linecap="round"
              stroke-linejoin="round"
              stroke-width="2"
              d="M12 6v6m0 0v6m0-6h6m-6 0H6"
            />
          </svg>
          Add new parks, rides, and companies
        </li>
        <li class="flex items-start gap-2">
          <svg
            class="h-4 w-4 text-blue-500 mt-0.5 flex-shrink-0"
            fill="none"
            viewBox="0 0 24 24"
            stroke="currentColor"
          >
            <path
              stroke-linecap="round"
              stroke-linejoin="round"
              stroke-width="2"
              d="M11 5H6a2 2 0 00-2 2v11a2 2 0 002 2h11a2 2 0 002-2v-5m-1.414-9.414a2 2 0 112.828 2.828L11.828 15H9v-2.828l8.586-8.586z"
            />
          </svg>
          Edit and improve existing entries
        </li>
        <li class="flex items-start gap-2">
          <svg
            class="h-4 w-4 text-blue-500 mt-0.5 flex-shrink-0"
            fill="none"
            viewBox="0 0 24 24"
            stroke="currentColor"
          >
            <path
              stroke-linecap="round"
              stroke-linejoin="round"
              stroke-width="2"
              d="M4.318 6.318a4.5 4.5 0 000 6.364L12 20.364l7.682-7.682a4.5 4.5 0 00-6.364-6.364L12 7.636l-1.318-1.318a4.5 4.5 0 00-6.364 0z"
            />
          </svg>
          Save your favorite places
        </li>
        <li class="flex items-start gap-2">
          <svg
            class="h-4 w-4 text-blue-500 mt-0.5 flex-shrink-0"
            fill="none"
            viewBox="0 0 24 24"
            stroke="currentColor"
          >
            <path
              stroke-linecap="round"
              stroke-linejoin="round"
              stroke-width="2"
              d="M17 8h2a2 2 0 012 2v6a2 2 0 01-2 2h-2v4l-4-4H9a2 2 0 01-2-2v-6a2 2 0 012-2h8z"
            />
          </svg>
          Share reviews and experiences
        </li>
      </ul>
    </div>

    <!-- Action Buttons -->
    <div class="flex flex-col sm:flex-row gap-3">
      <button
        @click="handleLogin"
        class="flex-1 bg-blue-600 text-white px-6 py-3 rounded-lg font-semibold hover:bg-blue-700 transition-colors focus:ring-2 focus:ring-blue-500 focus:ring-offset-2 focus:outline-none"
      >
        Sign In
      </button>
      <button
        @click="handleSignup"
        class="flex-1 bg-white dark:bg-gray-800 text-gray-700 dark:text-gray-300 px-6 py-3 rounded-lg font-semibold border border-gray-300 dark:border-gray-600 hover:bg-gray-50 dark:hover:bg-gray-700 transition-colors focus:ring-2 focus:ring-blue-500 focus:ring-offset-2 focus:outline-none"
      >
        Create Account
      </button>
    </div>

    <!-- Alternative Options -->
    <div class="text-center pt-4 border-t border-gray-200 dark:border-gray-700">
      <p class="text-sm text-gray-500 dark:text-gray-400 mb-3">
        Or continue exploring ThrillWiki
      </p>
      <button
        @click="handleBrowseExisting"
        class="text-blue-600 dark:text-blue-400 hover:text-blue-700 dark:hover:text-blue-300 text-sm font-medium transition-colors"
      >
        Browse existing entries →
      </button>
    </div>
  </div>
</template>

<script setup lang="ts">
interface Props {
  searchTerm: string;
}

const props = defineProps<Props>();

const emit = defineEmits<{
  login: [];
  signup: [];
  browse: [];
}>();

const handleLogin = () => {
  emit("login");
};

const handleSignup = () => {
  emit("signup");
};

const handleBrowseExisting = () => {
  emit("browse");
};
</script>
```
---

**New file:** `frontend/src/components/entity/EntitySuggestionCard.vue` (194 lines)
```vue
<template>
  <div
    class="group relative bg-gray-50 dark:bg-gray-700 rounded-lg p-4 hover:bg-gray-100 dark:hover:bg-gray-600 transition-colors cursor-pointer border border-gray-200 dark:border-gray-600"
    @click="handleSelect"
  >
    <!-- Entity Type Badge -->
    <div class="flex items-start justify-between mb-3">
      <div class="flex items-center gap-2">
        <span :class="entityTypeBadgeClasses">
          <component :is="entityIcon" class="h-4 w-4" />
          {{ entityTypeLabel }}
        </span>
        <span
          v-if="suggestion.confidence_score"
          :class="confidenceClasses"
          class="text-xs px-2 py-1 rounded-full font-medium"
        >
          {{ confidenceLabel }}
        </span>
      </div>

      <!-- Select Arrow -->
      <div class="opacity-0 group-hover:opacity-100 transition-opacity">
        <svg
          class="h-5 w-5 text-gray-400 dark:text-gray-500"
          fill="none"
          viewBox="0 0 24 24"
          stroke="currentColor"
        >
          <path
            stroke-linecap="round"
            stroke-linejoin="round"
            stroke-width="2"
            d="M9 5l7 7-7 7"
          />
        </svg>
      </div>
    </div>

    <!-- Entity Name -->
    <h4 class="text-lg font-semibold text-gray-900 dark:text-white mb-2">
      {{ suggestion.name }}
    </h4>

    <!-- Entity Details -->
    <div class="space-y-2">
      <!-- Location for Parks -->
      <div
        v-if="suggestion.entity_type === 'park' && suggestion.location"
        class="flex items-center gap-2 text-sm text-gray-600 dark:text-gray-400"
      >
        <svg class="h-4 w-4" fill="none" viewBox="0 0 24 24" stroke="currentColor">
          <path
            stroke-linecap="round"
            stroke-linejoin="round"
            stroke-width="2"
            d="M17.657 16.657L13.414 20.9a1.998 1.998 0 01-2.827 0l-4.244-4.243a8 8 0 1111.314 0z"
          />
          <path
            stroke-linecap="round"
            stroke-linejoin="round"
            stroke-width="2"
            d="M15 11a3 3 0 11-6 0 3 3 0 016 0z"
          />
        </svg>
        {{ suggestion.location }}
      </div>

      <!-- Park for Rides -->
      <div
        v-if="suggestion.entity_type === 'ride' && suggestion.park_name"
        class="flex items-center gap-2 text-sm text-gray-600 dark:text-gray-400"
      >
        <svg class="h-4 w-4" fill="none" viewBox="0 0 24 24" stroke="currentColor">
          <path
            stroke-linecap="round"
            stroke-linejoin="round"
            stroke-width="2"
            d="M19 21V5a2 2 0 00-2-2H7a2 2 0 00-2 2v16m14 0h2m-2 0h-5m-9 0H3m2 0h5M9 7h1m-1 4h1m4-4h1m-1 4h1m-5 10v-5a1 1 0 011-1h2a1 1 0 011 1v5m-4 0h4"
          />
        </svg>
        At {{ suggestion.park_name }}
      </div>

      <!-- Description -->
      <p
        v-if="suggestion.description"
        class="text-sm text-gray-600 dark:text-gray-400 line-clamp-2"
      >
        {{ suggestion.description }}
      </p>

      <!-- Match Reason -->
      <div
        v-if="suggestion.match_reason"
        class="text-xs text-blue-600 dark:text-blue-400 bg-blue-50 dark:bg-blue-900/20 px-2 py-1 rounded"
      >
        {{ suggestion.match_reason }}
      </div>
    </div>
  </div>
</template>

<script setup lang="ts">
import { computed } from "vue";
import type { EntitySuggestion } from "../../services/api";

interface Props {
  suggestion: EntitySuggestion;
}

const props = defineProps<Props>();

const emit = defineEmits<{
  select: [suggestion: EntitySuggestion];
}>();

// Entity type configurations
const entityTypeConfig = {
  park: {
    label: "Park",
    badgeClass: "bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200",
    icon: "BuildingStorefrontIcon",
  },
  ride: {
    label: "Ride",
    badgeClass: "bg-blue-100 text-blue-800 dark:bg-blue-900 dark:text-blue-200",
    icon: "SparklesIcon",
  },
  company: {
    label: "Company",
    badgeClass: "bg-purple-100 text-purple-800 dark:bg-purple-900 dark:text-purple-200",
    icon: "BuildingOfficeIcon",
  },
};

// Computed properties
const entityTypeLabel = computed(
  () => entityTypeConfig[props.suggestion.entity_type]?.label || "Entity"
);

const entityTypeBadgeClasses = computed(() => {
  const baseClasses =
    "inline-flex items-center gap-1 px-2 py-1 rounded-full text-xs font-medium";
  const typeClasses =
    entityTypeConfig[props.suggestion.entity_type]?.badgeClass ||
    "bg-gray-100 text-gray-800 dark:bg-gray-700 dark:text-gray-200";
  return `${baseClasses} ${typeClasses}`;
});

const confidenceLabel = computed(() => {
  const score = props.suggestion.confidence_score;
  if (score >= 0.8) return "High Match";
  if (score >= 0.6) return "Good Match";
  if (score >= 0.4) return "Possible Match";
  return "Low Match";
});

const confidenceClasses = computed(() => {
  const score = props.suggestion.confidence_score;
  if (score >= 0.8)
    return "bg-green-100 text-green-700 dark:bg-green-900 dark:text-green-300";
  if (score >= 0.6)
    return "bg-yellow-100 text-yellow-700 dark:bg-yellow-900 dark:text-yellow-300";
  if (score >= 0.4)
    return "bg-orange-100 text-orange-700 dark:bg-orange-900 dark:text-orange-300";
  return "bg-red-100 text-red-700 dark:bg-red-900 dark:text-red-300";
});

// Simple icon components
const entityIcon = computed(() => {
  const type = props.suggestion.entity_type;

  // Return appropriate icon component name or a default SVG
  if (type === "park") return "BuildingStorefrontIcon";
  if (type === "ride") return "SparklesIcon";
  if (type === "company") return "BuildingOfficeIcon";
  return "QuestionMarkCircleIcon";
});

// Event handlers
const handleSelect = () => {
  emit("select", props.suggestion);
};
</script>

<style scoped>
.line-clamp-2 {
  display: -webkit-box;
  -webkit-line-clamp: 2;
  -webkit-box-orient: vertical;
  overflow: hidden;
}
</style>
```
---

**New file:** `frontend/src/components/entity/EntitySuggestionManager.vue` (235 lines)
```vue
<template>
  <EntitySuggestionModal
    :show="showModal"
    :search-term="searchTerm"
    :suggestions="suggestions"
    :is-authenticated="isAuthenticated"
    @close="handleClose"
    @select-suggestion="handleSuggestionSelect"
    @add-entity="handleAddEntity"
    @login="handleLogin"
    @signup="handleSignup"
  />

  <!-- Authentication Manager -->
  <AuthManager
    :show="showAuthModal"
    :initial-mode="authMode"
    @close="handleAuthClose"
    @success="handleAuthSuccess"
  />
</template>

<script setup lang="ts">
import { ref, computed, watch, readonly } from "vue";
import { useRouter } from "vue-router";
import { useAuth } from "../../composables/useAuth";
import { ThrillWikiApi, type EntitySuggestion } from "../../services/api";
import EntitySuggestionModal from "./EntitySuggestionModal.vue";
import AuthManager from "../auth/AuthManager.vue";

interface Props {
  searchTerm: string;
  show?: boolean;
  entityTypes?: string[];
  parkContext?: string;
  maxSuggestions?: number;
}

const props = withDefaults(defineProps<Props>(), {
  show: false,
  entityTypes: () => ["park", "ride", "company"],
  maxSuggestions: 5,
});

const emit = defineEmits<{
  close: [];
  entitySelected: [entity: EntitySuggestion];
  entityAdded: [entityType: string, name: string];
  error: [message: string];
}>();

// Dependencies
const router = useRouter();
const { isAuthenticated } = useAuth();
const api = new ThrillWikiApi();

// Reactive state
const showModal = ref(props.show);
const suggestions = ref<EntitySuggestion[]>([]);
const loading = ref(false);
const error = ref<string | null>(null);

// Authentication modal state
const showAuthModal = ref(false);
const authMode = ref<'login' | 'signup'>('login');

// Computed properties
const hasValidSearchTerm = computed(() => {
  return props.searchTerm && props.searchTerm.trim().length > 0;
});

// Watch for prop changes
watch(
  () => props.show,
  (newShow) => {
    showModal.value = newShow;
    if (newShow && hasValidSearchTerm.value) {
      performFuzzySearch();
    }
  },
  { immediate: true }
);

watch(
  () => props.searchTerm,
  (newTerm) => {
    if (showModal.value && newTerm && newTerm.trim().length > 0) {
      performFuzzySearch();
    }
  }
);

// Methods
const performFuzzySearch = async () => {
  if (!hasValidSearchTerm.value) {
    suggestions.value = [];
    return;
  }

  loading.value = true;
  error.value = null;

  try {
    const response = await api.entitySearch.fuzzySearch({
      query: props.searchTerm.trim(),
      entityTypes: props.entityTypes,
      parkContext: props.parkContext,
      maxResults: props.maxSuggestions,
      minConfidence: 0.3,
    });

    suggestions.value = response.suggestions || [];
  } catch (err) {
    console.error("Fuzzy search failed:", err);
    error.value = "Failed to search for similar entities. Please try again.";
    suggestions.value = [];
    emit("error", error.value);
  } finally {
    loading.value = false;
  }
};

const handleClose = () => {
  showModal.value = false;
  emit("close");
};

const handleSuggestionSelect = (suggestion: EntitySuggestion) => {
  emit("entitySelected", suggestion);
  handleClose();

  // Navigate to the selected entity
  navigateToEntity(suggestion);
};

const handleAddEntity = async (entityType: string, name: string) => {
  try {
    // Emit event for parent to handle
    emit("entityAdded", entityType, name);

    // For now, just close the modal.
    // In a real implementation, this might navigate to an add-entity form.
    handleClose();

    // A success message could also be shown here.
    console.log(`Entity creation initiated: ${entityType} - ${name}`);
  } catch (err) {
    console.error("Failed to initiate entity creation:", err);
    error.value = "Failed to initiate entity creation. Please try again.";
    emit("error", error.value);
  }
};

const handleLogin = () => {
  authMode.value = 'login';
  showAuthModal.value = true;
};

const handleSignup = () => {
  authMode.value = 'signup';
  showAuthModal.value = true;
};

// Authentication modal handlers
const handleAuthClose = () => {
  showAuthModal.value = false;
};

const handleAuthSuccess = () => {
  showAuthModal.value = false;
  // Optionally refresh suggestions now that the user is authenticated
  if (hasValidSearchTerm.value && showModal.value) {
    performFuzzySearch();
  }
};

const navigateToEntity = (entity: EntitySuggestion) => {
  try {
    let route = "";

    switch (entity.entity_type) {
      case "park":
        route = `/parks/${entity.slug}`;
        break;
      case "ride":
        if (entity.park_slug) {
          route = `/parks/${entity.park_slug}/rides/${entity.slug}`;
        } else {
          route = `/rides/${entity.slug}`;
        }
        break;
      case "company":
        route = `/companies/${entity.slug}`;
        break;
      default:
        console.warn(`Unknown entity type: ${entity.entity_type}`);
        return;
    }

    router.push(route);
  } catch (err) {
    console.error("Failed to navigate to entity:", err);
    error.value = "Failed to navigate to the selected entity.";
    emit("error", error.value);
  }
};

// Public methods for external control
const show = () => {
  showModal.value = true;
  if (hasValidSearchTerm.value) {
    performFuzzySearch();
  }
};

const hide = () => {
  showModal.value = false;
};

const refresh = () => {
  if (showModal.value && hasValidSearchTerm.value) {
    performFuzzySearch();
  }
};

// Expose methods for parent components
defineExpose({
  show,
  hide,
  refresh,
  suggestions: readonly(suggestions),
  loading: readonly(loading),
  error: readonly(error),
});
</script>
```
---

**New file:** `frontend/src/components/entity/EntitySuggestionModal.vue` (226 lines)
```vue
<template>
  <Teleport to="body">
    <Transition
      enter-active-class="duration-300 ease-out"
      enter-from-class="opacity-0"
      enter-to-class="opacity-100"
      leave-active-class="duration-200 ease-in"
      leave-from-class="opacity-100"
      leave-to-class="opacity-0"
    >
      <div
        v-if="show"
        class="fixed inset-0 z-50 overflow-y-auto"
        @click="closeOnBackdrop && handleBackdropClick"
      >
        <!-- Backdrop -->
        <div class="fixed inset-0 bg-black/50 backdrop-blur-sm"></div>

        <!-- Modal Container -->
        <div class="flex min-h-full items-center justify-center p-4">
          <Transition
            enter-active-class="duration-300 ease-out"
            enter-from-class="opacity-0 scale-95"
            enter-to-class="opacity-100 scale-100"
            leave-active-class="duration-200 ease-in"
            leave-from-class="opacity-100 scale-100"
            leave-to-class="opacity-0 scale-95"
          >
            <div
              v-if="show"
              class="relative w-full max-w-2xl transform overflow-hidden rounded-2xl bg-white dark:bg-gray-800 shadow-2xl transition-all"
              @click.stop
            >
              <!-- Header -->
              <div
                class="flex items-center justify-between p-6 pb-4 border-b border-gray-200 dark:border-gray-700"
              >
                <div>
                  <h2 class="text-2xl font-bold text-gray-900 dark:text-white">
                    Entity Not Found
                  </h2>
                  <p class="text-sm text-gray-600 dark:text-gray-400 mt-1">
                    We couldn't find "{{ searchTerm }}" but here are some suggestions
                  </p>
                </div>
                <button
                  @click="$emit('close')"
                  class="rounded-lg p-2 text-gray-400 hover:bg-gray-100 hover:text-gray-600 dark:hover:bg-gray-700 dark:hover:text-gray-300 transition-colors"
                >
                  <svg
                    class="h-5 w-5"
                    fill="none"
                    viewBox="0 0 24 24"
                    stroke="currentColor"
                  >
                    <path
                      stroke-linecap="round"
                      stroke-linejoin="round"
                      stroke-width="2"
                      d="M6 18L18 6M6 6l12 12"
                    />
                  </svg>
                </button>
              </div>

              <!-- Content -->
              <div class="px-6 pb-6">
                <!-- Suggestions Section -->
                <div v-if="suggestions.length > 0" class="mb-6">
                  <h3 class="text-lg font-semibold text-gray-900 dark:text-white mb-4">
                    Did you mean one of these?
                  </h3>
                  <div class="space-y-3">
                    <EntitySuggestionCard
                      v-for="suggestion in suggestions"
                      :key="`${suggestion.entity_type}-${suggestion.slug}`"
                      :suggestion="suggestion"
                      @select="handleSuggestionSelect"
                    />
                  </div>
                </div>

                <!-- No Suggestions / Add New Section -->
                <div class="border-t border-gray-200 dark:border-gray-700 pt-6">
                  <h3 class="text-lg font-semibold text-gray-900 dark:text-white mb-4">
                    Can't find what you're looking for?
                  </h3>

                  <!-- Authenticated User - Add Entity -->
                  <div v-if="isAuthenticated" class="space-y-4">
                    <p class="text-gray-600 dark:text-gray-400">
                      You can help improve ThrillWiki by adding this entity to our
                      database.
                    </p>
                    <div class="flex gap-3">
                      <button
                        @click="handleAddEntity('park')"
                        :disabled="loading"
                        class="flex-1 bg-blue-600 text-white px-4 py-2 rounded-lg hover:bg-blue-700 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
                      >
                        Add as Park
                      </button>
                      <button
                        @click="handleAddEntity('ride')"
                        :disabled="loading"
                        class="flex-1 bg-green-600 text-white px-4 py-2 rounded-lg hover:bg-green-700 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
                      >
                        Add as Ride
                      </button>
                      <button
                        @click="handleAddEntity('company')"
                        :disabled="loading"
                        class="flex-1 bg-purple-600 text-white px-4 py-2 rounded-lg hover:bg-purple-700 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
                      >
                        Add as Company
                      </button>
                    </div>
                  </div>

                  <!-- Unauthenticated User - Auth Prompt -->
                  <AuthPrompt
                    v-else
                    :search-term="searchTerm"
                    @login="handleLogin"
                    @signup="handleSignup"
                  />
                </div>
              </div>

              <!-- Loading Overlay -->
              <div
                v-if="loading"
                class="absolute inset-0 bg-white/80 dark:bg-gray-800/80 flex items-center justify-center rounded-2xl"
              >
                <div class="flex items-center gap-3">
                  <div
|
||||
class="animate-spin rounded-full h-6 w-6 border-b-2 border-blue-600"
|
||||
></div>
|
||||
<span class="text-gray-700 dark:text-gray-300">{{
|
||||
loadingMessage
|
||||
}}</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</Transition>
|
||||
</div>
|
||||
</div>
|
||||
</Transition>
|
||||
</Teleport>
|
||||
</template>
|
||||
|
||||
<script setup lang="ts">
|
||||
import { ref, computed, toRefs, onUnmounted, watch } from "vue";
|
||||
import type { EntitySuggestion } from "../../services/api";
|
||||
import EntitySuggestionCard from "./EntitySuggestionCard.vue";
|
||||
import AuthPrompt from "./AuthPrompt.vue";
|
||||
|
||||
interface Props {
|
||||
show: boolean;
|
||||
searchTerm: string;
|
||||
suggestions: EntitySuggestion[];
|
||||
isAuthenticated: boolean;
|
||||
closeOnBackdrop?: boolean;
|
||||
}
|
||||
|
||||
const props = withDefaults(defineProps<Props>(), {
|
||||
closeOnBackdrop: true,
|
||||
});
|
||||
|
||||
const emit = defineEmits<{
|
||||
close: [];
|
||||
selectSuggestion: [suggestion: EntitySuggestion];
|
||||
addEntity: [entityType: string, name: string];
|
||||
login: [];
|
||||
signup: [];
|
||||
}>();
|
||||
|
||||
// Loading state
|
||||
const loading = ref(false);
|
||||
const loadingMessage = ref("");
|
||||
|
||||
const handleBackdropClick = (event: MouseEvent) => {
|
||||
if (props.closeOnBackdrop && event.target === event.currentTarget) {
|
||||
emit("close");
|
||||
}
|
||||
};
|
||||
|
||||
const handleSuggestionSelect = (suggestion: EntitySuggestion) => {
|
||||
emit("selectSuggestion", suggestion);
|
||||
};
|
||||
|
||||
const handleAddEntity = async (entityType: string) => {
|
||||
loading.value = true;
|
||||
loadingMessage.value = `Adding ${entityType}...`;
|
||||
|
||||
try {
|
||||
emit("addEntity", entityType, props.searchTerm);
|
||||
} finally {
|
||||
loading.value = false;
|
||||
loadingMessage.value = "";
|
||||
}
|
||||
};
|
||||
|
||||
const handleLogin = () => {
|
||||
emit("login");
|
||||
};
|
||||
|
||||
const handleSignup = () => {
|
||||
emit("signup");
|
||||
};
|
||||
|
||||
// Prevent body scroll when modal is open
|
||||
const { show } = toRefs(props);
|
||||
watch(show, (isShown) => {
|
||||
if (isShown) {
|
||||
document.body.style.overflow = "hidden";
|
||||
} else {
|
||||
document.body.style.overflow = "";
|
||||
}
|
||||
});
|
||||
|
||||
// Clean up on unmount
|
||||
onUnmounted(() => {
|
||||
document.body.style.overflow = "";
|
||||
});
|
||||
</script>
|
||||
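One thing the modal leaves to `EntitySuggestionCard` is how a flat `suggestions` array would be organized for display. As an illustration only (this helper is not part of the commit), suggestions could be grouped by `entity_type` and ordered by confidence before rendering:

```typescript
// Hypothetical helper, not part of this PR: group suggestions by entity_type
// and sort each group by descending confidence_score.
interface EntitySuggestion {
  name: string;
  slug: string;
  entity_type: "park" | "ride" | "company";
  confidence_score: number;
}

function groupByEntityType(
  suggestions: EntitySuggestion[]
): Record<string, EntitySuggestion[]> {
  const groups: Record<string, EntitySuggestion[]> = {};
  for (const s of suggestions) {
    // Create the bucket on first use, then append.
    (groups[s.entity_type] ??= []).push(s);
  }
  // Best match first within each bucket.
  for (const key of Object.keys(groups)) {
    groups[key].sort((a, b) => b.confidence_score - a.confidence_score);
  }
  return groups;
}
```

The `entity_type`-plus-`slug` pair is already used as the `v-for` key in the template, so a grouped rendering would keep those keys unique.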
7 frontend/src/components/entity/index.ts Normal file
@@ -0,0 +1,7 @@
// Entity suggestion components
export { default as EntitySuggestionModal } from './EntitySuggestionModal.vue'
export { default as EntitySuggestionCard } from './EntitySuggestionCard.vue'
export { default as AuthPrompt } from './AuthPrompt.vue'

// Main integration component
export { default as EntitySuggestionManager } from './EntitySuggestionManager.vue'
@@ -22,144 +22,160 @@
</template>

<script setup lang="ts">
import { computed } from 'vue'
import { computed } from "vue";

interface CardProps {
variant?: 'default' | 'outline' | 'ghost' | 'elevated'
size?: 'sm' | 'md' | 'lg'
title?: string
padding?: 'none' | 'sm' | 'md' | 'lg'
rounded?: 'none' | 'sm' | 'md' | 'lg' | 'xl'
shadow?: 'none' | 'sm' | 'md' | 'lg' | 'xl'
hover?: boolean
interactive?: boolean
variant?: "default" | "outline" | "ghost" | "elevated" | "featured";
size?: "sm" | "md" | "lg" | "xl";
title?: string;
padding?: "none" | "sm" | "md" | "lg" | "xl";
rounded?: "none" | "sm" | "md" | "lg" | "xl" | "2xl";
shadow?: "none" | "sm" | "md" | "lg" | "xl" | "2xl";
hover?: boolean;
interactive?: boolean;
bordered?: boolean;
}

const props = withDefaults(defineProps<CardProps>(), {
variant: 'default',
size: 'md',
padding: 'md',
rounded: 'lg',
shadow: 'sm',
variant: "default",
size: "md",
padding: "md",
rounded: "lg",
shadow: "sm",
hover: false,
interactive: false,
})
bordered: true,
});

// Base card classes
const baseClasses =
'bg-white dark:bg-gray-800 border-gray-200 dark:border-gray-700 transition-all duration-200'
// Base card classes - Consistent background for both light and dark modes
const baseClasses = "bg-white dark:bg-gray-800 transition-all duration-200 ease-in-out";

// Variant classes
const variantClasses = computed(() => {
const variants = {
default: 'border',
outline: 'border-2',
ghost: 'border-0 bg-transparent dark:bg-transparent',
elevated: 'border-0',
}
return variants[props.variant]
})
default: props.bordered ? "border border-gray-200 dark:border-gray-700" : "border-0",
outline: "border-2 border-gray-300 dark:border-gray-600",
ghost: "border-0 bg-transparent dark:bg-transparent",
elevated: "border-0",
featured:
"border border-blue-200 dark:border-blue-800 bg-gradient-to-br from-blue-50 to-white dark:from-blue-950 dark:to-gray-800",
};
return variants[props.variant];
});

// Shadow classes
const shadowClasses = computed(() => {
if (props.variant === 'ghost') return ''
if (props.variant === "ghost") return "";

const shadows = {
none: '',
sm: 'shadow-sm',
md: 'shadow-md',
lg: 'shadow-lg',
xl: 'shadow-xl',
}
return shadows[props.shadow]
})
none: "",
sm: "shadow-sm hover:shadow-md",
md: "shadow-md hover:shadow-lg",
lg: "shadow-lg hover:shadow-xl",
xl: "shadow-xl hover:shadow-2xl",
"2xl": "shadow-2xl hover:shadow-2xl",
};
return shadows[props.shadow];
});

// Rounded classes
const roundedClasses = computed(() => {
const rounded = {
none: 'rounded-none',
sm: 'rounded-sm',
md: 'rounded-md',
lg: 'rounded-lg',
xl: 'rounded-xl',
}
return rounded[props.rounded]
})
none: "rounded-none",
sm: "rounded-sm",
md: "rounded-md",
lg: "rounded-lg",
xl: "rounded-xl",
"2xl": "rounded-2xl",
};
return rounded[props.rounded];
});

// Hover classes
const hoverClasses = computed(() => {
if (!props.hover && !props.interactive) return ''
if (!props.hover && !props.interactive) return "";

let classes = ''
if (props.hover) {
classes += ' hover:shadow-md'
if (props.variant !== 'ghost') {
classes += ' hover:border-gray-300 dark:hover:border-gray-600'
let classes = "";
if (props.hover || props.interactive) {
if (props.variant !== "ghost") {
classes += " hover:border-gray-300 dark:hover:border-gray-600";
}
if (props.variant === "featured") {
classes +=
" hover:from-blue-100 hover:to-blue-50 dark:hover:from-blue-900 dark:hover:to-gray-700";
}
}

if (props.interactive) {
classes += ' cursor-pointer hover:scale-[1.02] active:scale-[0.98]'
classes +=
" cursor-pointer hover:scale-[1.01] active:scale-[0.99] hover:-translate-y-0.5";
}

return classes
})
return classes;
});

// Padding classes for different sections
const paddingClasses = computed(() => {
const paddings = {
none: '',
sm: 'p-3',
md: 'p-4',
lg: 'p-6',
}
return paddings[props.padding]
})
none: "",
sm: "p-3",
md: "p-4",
lg: "p-6",
xl: "p-8",
};
return paddings[props.padding];
});

const headerPadding = computed(() => {
if (props.padding === 'none') return ''
if (props.padding === "none") return "";
const paddings = {
sm: 'px-3 pt-3',
md: 'px-4 pt-4',
lg: 'px-6 pt-6',
}
return paddings[props.padding]
})
sm: "px-3 pt-3",
md: "px-4 pt-4",
lg: "px-6 pt-6",
xl: "px-8 pt-8",
};
return paddings[props.padding];
});

const contentPadding = computed(() => {
if (props.padding === 'none') return ''
if (props.padding === "none") return "";

const hasHeader = props.title || props.$slots?.header
const hasFooter = props.$slots?.footer
const hasHeader = props.title || props.$slots?.header;
const hasFooter = props.$slots?.footer;

let classes = ''
let classes = "";

if (props.padding === 'sm') {
classes = 'px-3'
if (!hasHeader) classes += ' pt-3'
if (!hasFooter) classes += ' pb-3'
} else if (props.padding === 'md') {
classes = 'px-4'
if (!hasHeader) classes += ' pt-4'
if (!hasFooter) classes += ' pb-4'
} else if (props.padding === 'lg') {
classes = 'px-6'
if (!hasHeader) classes += ' pt-6'
if (!hasFooter) classes += ' pb-6'
if (props.padding === "sm") {
classes = "px-3";
if (!hasHeader) classes += " pt-3";
if (!hasFooter) classes += " pb-3";
} else if (props.padding === "md") {
classes = "px-4";
if (!hasHeader) classes += " pt-4";
if (!hasFooter) classes += " pb-4";
} else if (props.padding === "lg") {
classes = "px-6";
if (!hasHeader) classes += " pt-6";
if (!hasFooter) classes += " pb-6";
} else if (props.padding === "xl") {
classes = "px-8";
if (!hasHeader) classes += " pt-8";
if (!hasFooter) classes += " pb-8";
}

return classes
})
return classes;
});

const footerPadding = computed(() => {
if (props.padding === 'none') return ''
if (props.padding === "none") return "";
const paddings = {
sm: 'px-3 pb-3',
md: 'px-4 pb-4',
lg: 'px-6 pb-6',
}
return paddings[props.padding]
})
sm: "px-3 pb-3",
md: "px-4 pb-4",
lg: "px-6 pb-6",
xl: "px-8 pb-8",
};
return paddings[props.padding];
});

// Combined classes
const cardClasses = computed(() => {
@@ -169,38 +185,39 @@ const cardClasses = computed(() => {
shadowClasses.value,
roundedClasses.value,
hoverClasses.value,
props.padding === 'none' ? '' : '',
props.padding === "none" ? "" : "",
]
.filter(Boolean)
.join(' ')
})
.join(" ");
});

const headerClasses = computed(() => {
let classes = headerPadding.value
if (props.padding !== 'none') {
classes += ' border-b border-gray-200 dark:border-gray-700'
let classes = headerPadding.value;
if (props.padding !== "none") {
classes += " border-b border-gray-200 dark:border-gray-700";
}
return classes
})
return classes;
});

const contentClasses = computed(() => {
return contentPadding.value
})
return contentPadding.value;
});

const footerClasses = computed(() => {
let classes = footerPadding.value
if (props.padding !== 'none') {
classes += ' border-t border-gray-200 dark:border-gray-700'
let classes = footerPadding.value;
if (props.padding !== "none") {
classes += " border-t border-gray-200 dark:border-gray-700";
}
return classes
})
return classes;
});

const titleClasses = computed(() => {
const sizes = {
sm: 'text-lg font-semibold',
md: 'text-xl font-semibold',
lg: 'text-2xl font-semibold',
}
return `${sizes[props.size]} text-gray-900 dark:text-gray-100`
})
sm: "text-lg font-semibold leading-6",
md: "text-xl font-semibold leading-7",
lg: "text-2xl font-semibold leading-8",
xl: "text-3xl font-bold leading-9",
};
return `${sizes[props.size]} text-gray-900 dark:text-gray-100 tracking-tight`;
});
</script>
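The card's styling comes together by mapping each prop through a lookup table and joining the non-empty results, as `cardClasses` does above. A standalone sketch of that composition pattern (the plain-function form and option names here are ours, for illustration):

```typescript
// Sketch of Card.vue's class composition: prop -> Tailwind class maps,
// joined with the non-empty parts filtered in.
type Shadow = "none" | "sm" | "md" | "lg" | "xl" | "2xl";
type Rounded = "none" | "sm" | "md" | "lg" | "xl" | "2xl";

const shadowMap: Record<Shadow, string> = {
  none: "",
  sm: "shadow-sm hover:shadow-md",
  md: "shadow-md hover:shadow-lg",
  lg: "shadow-lg hover:shadow-xl",
  xl: "shadow-xl hover:shadow-2xl",
  "2xl": "shadow-2xl hover:shadow-2xl",
};

function cardClasses(opts: {
  shadow: Shadow;
  rounded: Rounded;
  ghost?: boolean;
}): string {
  const base =
    "bg-white dark:bg-gray-800 transition-all duration-200 ease-in-out";
  // The ghost variant suppresses shadows entirely, as in shadowClasses.
  const shadow = opts.ghost ? "" : shadowMap[opts.shadow];
  const rounded = `rounded-${opts.rounded}`;
  return [base, shadow, rounded].filter(Boolean).join(" ");
}
```

Keeping every mapping in a `Record` keyed by the prop union lets TypeScript flag a missing case (e.g. a new `"2xl"` size) at compile time rather than at render time.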
@@ -12,8 +12,58 @@ import type {
PasswordResetRequest,
PasswordChangeRequest,
SocialAuthProvider,
TrendingResponse,
NewContentResponse,
TrendingItem,
NewContentItem,
RideRanking,
RideRankingDetail,
HeadToHeadComparison,
RankingSnapshot,
RankingStatistics,
} from '@/types'

// Entity fuzzy matching types
export interface EntitySuggestion {
name: string
slug: string
entity_type: 'park' | 'ride' | 'company'
match_type: 'exact' | 'fuzzy' | 'partial'
confidence_score: number
additional_info?: {
park_name?: string
opened_date?: string
location?: string
}
}

export interface FuzzyMatchResult {
query: string
exact_matches: EntitySuggestion[]
fuzzy_matches: EntitySuggestion[]
suggestions: EntitySuggestion[]
total_matches: number
search_time_ms: number
authentication_required: boolean
can_add_entity: boolean
}

export interface EntityNotFoundResponse {
entity_name: string
entity_type?: string
suggestions: EntitySuggestion[]
can_add_entity: boolean
authentication_required: boolean
add_entity_url?: string
login_url: string
signup_url: string
}

export interface QuickSuggestionResponse {
suggestions: EntitySuggestion[]
total_available: number
}
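The `EntitySuggestion` shape carries both a `match_type` and a `confidence_score`, so a client can impose a display order on mixed results. One plausible policy (an assumption on our part, not specified by the API) is to prefer exact over fuzzy over partial matches, breaking ties by confidence:

```typescript
// Illustrative ordering for EntitySuggestion results; the priority of
// match types is our assumption, not part of the API contract.
type MatchType = "exact" | "fuzzy" | "partial";

interface Suggestion {
  name: string;
  match_type: MatchType;
  confidence_score: number;
}

const matchPriority: Record<MatchType, number> = {
  exact: 0,
  fuzzy: 1,
  partial: 2,
};

function orderSuggestions(suggestions: Suggestion[]): Suggestion[] {
  // Copy before sorting so the caller's array is untouched.
  return [...suggestions].sort(
    (a, b) =>
      matchPriority[a.match_type] - matchPriority[b.match_type] ||
      b.confidence_score - a.confidence_score
  );
}
```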
// History-specific types
export interface HistoryEvent {
id: string
@@ -311,7 +361,7 @@ export class ParksApi {
count: number
days: number
parks: Park[]
}>('/api/v1/parks/recent_changes/', params)
}>('/api/parks/recent_changes/', params)
}

/**
@@ -328,7 +378,7 @@ export class ParksApi {
count: number
days: number
parks: Park[]
}>('/api/v1/parks/recent_openings/', params)
}>('/api/parks/recent_openings/', params)
}

/**
@@ -345,7 +395,7 @@ export class ParksApi {
count: number
days: number
parks: Park[]
}>('/api/v1/parks/recent_closures/', params)
}>('/api/parks/recent_closures/', params)
}

/**
@@ -362,12 +412,12 @@ export class ParksApi {
count: number
days: number
parks: Park[]
}>('/api/v1/parks/recent_name_changes/', params)
}>('/api/parks/recent_name_changes/', params)
}
}

/**
* Rides API service
* Rides API service with context-aware endpoint selection
*/
export class RidesApi {
private client: ApiClient
@@ -377,7 +427,22 @@ export class RidesApi {
}

/**
* Get all rides with pagination
* Choose appropriate endpoint based on context
* @param parkSlug - If provided, use nested endpoint for park context
* @param operation - The operation being performed
* @returns The appropriate base URL
*/
private getEndpointUrl(parkSlug?: string, operation: 'list' | 'detail' | 'search' = 'list'): string {
if (parkSlug && (operation === 'list' || operation === 'detail')) {
// Use nested endpoint for park-contextual operations
return `/api/parks/${parkSlug}/rides`
}
// Use global endpoint for cross-park operations or when no park context
return '/api/rides'
}
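The endpoint-selection rule in `getEndpointUrl` is easiest to verify restated as a standalone function. Note that under this rule an `operation` of `'search'` always resolves to the global endpoint, even when a `parkSlug` is supplied:

```typescript
// Restatement of RidesApi.getEndpointUrl for verification; mirrors the
// branching in the diff above.
type Operation = "list" | "detail" | "search";

function endpointUrl(parkSlug?: string, operation: Operation = "list"): string {
  if (parkSlug && (operation === "list" || operation === "detail")) {
    // Nested endpoint for park-scoped reads
    return `/api/parks/${parkSlug}/rides`;
  }
  // Global endpoint for cross-park operations or when no park context exists
  return "/api/rides";
}
```

Because `searchRidesInPark` below passes `'search'`, it ends up on `/api/rides/search/` rather than a park-nested search path; whether that is intentional is not stated in the commit.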
/**
* Get all rides with pagination - uses global endpoint for cross-park view
*/
async getRides(params?: {
page?: number
@@ -390,28 +455,52 @@ export class RidesApi {
if (params?.search) queryParams.search = params.search
if (params?.ordering) queryParams.ordering = params.ordering

return this.client.get<ApiResponse<Ride>>('/api/rides/', queryParams)
return this.client.get<ApiResponse<Ride>>(`${this.getEndpointUrl()}/`, queryParams)
}

/**
* Get a single ride by park and ride slug
* Get rides for a specific park - uses nested endpoint for park context
*/
async getRidesByPark(parkSlug: string, params?: {
page?: number
search?: string
ordering?: string
}): Promise<ApiResponse<Ride>> {
const queryParams: Record<string, string> = {}

if (params?.page) queryParams.page = params.page.toString()
if (params?.search) queryParams.search = params.search
if (params?.ordering) queryParams.ordering = params.ordering

return this.client.get<ApiResponse<Ride>>(`${this.getEndpointUrl(parkSlug, 'list')}/`, queryParams)
}

/**
* Get a single ride by park and ride slug - uses nested endpoint for park context
*/
async getRide(parkSlug: string, rideSlug: string): Promise<Ride> {
return this.client.get<Ride>(`/api/rides/${parkSlug}/${rideSlug}/`)
return this.client.get<Ride>(`${this.getEndpointUrl(parkSlug, 'detail')}/${rideSlug}/`)
}

/**
* Search rides
* Get a ride by global ID (fallback method) - uses global endpoint
*/
async getRideById(rideId: string): Promise<Ride> {
return this.client.get<Ride>(`${this.getEndpointUrl()}/${rideId}/`)
}

/**
* Search rides globally - uses global endpoint for cross-park search
*/
async searchRides(query: string): Promise<SearchResponse<Ride>> {
return this.client.get<SearchResponse<Ride>>('/api/rides/search/', { q: query })
return this.client.get<SearchResponse<Ride>>(`${this.getEndpointUrl()}/search/`, { q: query })
}

/**
* Get rides by park
* Search rides within a specific park - uses nested endpoint for park context
*/
async getRidesByPark(parkSlug: string): Promise<SearchResponse<Ride>> {
return this.client.get<SearchResponse<Ride>>(`/api/rides/by-park/${parkSlug}/`)
async searchRidesInPark(parkSlug: string, query: string): Promise<SearchResponse<Ride>> {
return this.client.get<SearchResponse<Ride>>(`${this.getEndpointUrl(parkSlug, 'search')}/search/`, { q: query })
}

/**
@@ -435,7 +524,7 @@ export class RidesApi {
}

/**
* Get recently changed rides
* Get recently changed rides - uses global endpoint for cross-park analysis
*/
async getRecentChanges(days?: number): Promise<{
count: number
@@ -448,11 +537,11 @@ export class RidesApi {
count: number
days: number
rides: Ride[]
}>('/api/v1/rides/recent_changes/', params)
}>('/api/rides/recent_changes/', params)
}

/**
* Get recently opened rides
* Get recently opened rides - uses global endpoint for cross-park analysis
*/
async getRecentOpenings(days?: number): Promise<{
count: number
@@ -465,11 +554,11 @@ export class RidesApi {
count: number
days: number
rides: Ride[]
}>('/api/v1/rides/recent_openings/', params)
}>('/api/rides/recent_openings/', params)
}

/**
* Get recently closed rides
* Get recently closed rides - uses global endpoint for cross-park analysis
*/
async getRecentClosures(days?: number): Promise<{
count: number
@@ -482,11 +571,11 @@ export class RidesApi {
count: number
days: number
rides: Ride[]
}>('/api/v1/rides/recent_closures/', params)
}>('/api/rides/recent_closures/', params)
}

/**
* Get rides with recent name changes
* Get rides with recent name changes - uses global endpoint for cross-park analysis
*/
async getRecentNameChanges(days?: number): Promise<{
count: number
@@ -499,11 +588,11 @@ export class RidesApi {
count: number
days: number
rides: Ride[]
}>('/api/v1/rides/recent_name_changes/', params)
}>('/api/rides/recent_name_changes/', params)
}

/**
* Get rides that have been relocated recently
* Get rides that have been relocated recently - uses global endpoint for cross-park analysis
*/
async getRecentRelocations(days?: number): Promise<{
count: number
@@ -516,7 +605,7 @@ export class RidesApi {
count: number
days: number
rides: Ride[]
}>('/api/v1/rides/recent_relocations/', params)
}>('/api/rides/recent_relocations/', params)
}
}
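Each of the `recent_*` methods repeats the same fold of optional parameters into a `Record<string, string>`. A generic version of that pattern could look like the sketch below (the helper name is ours; note it keeps falsy-but-defined values such as `0`, whereas the `if (params?.page)` truthiness checks in the diff drop them):

```typescript
// Hypothetical generic form of the query-param folding used by the
// recent_* methods: defined values are stringified, undefined ones dropped.
function toQueryParams(
  params: Record<string, string | number | undefined>
): Record<string, string> {
  const query: Record<string, string> = {};
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) query[key] = String(value);
  }
  return query;
}
```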
@@ -530,11 +619,25 @@ export class AuthApi {
this.client = client
}

/**
* Set authentication token
*/
setAuthToken(token: string | null): void {
this.client.setAuthToken(token)
}

/**
* Get authentication token
*/
getAuthToken(): string | null {
return this.client.getAuthToken()
}

/**
* Login with username/email and password
*/
async login(credentials: LoginCredentials): Promise<AuthResponse> {
const response = await this.client.post<AuthResponse>('/api/accounts/login/', credentials)
const response = await this.client.post<AuthResponse>('/api/auth/login/', credentials)
if (response.token) {
this.client.setAuthToken(response.token)
}
@@ -620,7 +723,7 @@ export class HistoryApi {
if (params?.model_type) queryParams.model_type = params.model_type
if (params?.significance) queryParams.significance = params.significance

return this.client.get<UnifiedHistoryTimeline>('/api/v1/history/timeline/', queryParams)
return this.client.get<UnifiedHistoryTimeline>('/api/history/timeline/', queryParams)
}

/**
@@ -635,14 +738,14 @@ export class HistoryApi {
if (params?.start_date) queryParams.start_date = params.start_date
if (params?.end_date) queryParams.end_date = params.end_date

return this.client.get<HistoryEvent[]>(`/api/v1/parks/${parkSlug}/history/`, queryParams)
return this.client.get<HistoryEvent[]>(`/api/parks/${parkSlug}/history/`, queryParams)
}

/**
* Get complete park history with current state and summary
*/
async getParkHistoryDetail(parkSlug: string): Promise<ParkHistoryResponse> {
return this.client.get<ParkHistoryResponse>(`/api/v1/parks/${parkSlug}/history/detail/`)
return this.client.get<ParkHistoryResponse>(`/api/parks/${parkSlug}/history/detail/`)
}

/**
@@ -662,7 +765,7 @@ export class HistoryApi {
if (params?.end_date) queryParams.end_date = params.end_date

return this.client.get<HistoryEvent[]>(
`/api/v1/parks/${parkSlug}/rides/${rideSlug}/history/`,
`/api/parks/${parkSlug}/rides/${rideSlug}/history/`,
queryParams
)
}
@@ -672,7 +775,7 @@ export class HistoryApi {
*/
async getRideHistoryDetail(parkSlug: string, rideSlug: string): Promise<RideHistoryResponse> {
return this.client.get<RideHistoryResponse>(
`/api/v1/parks/${parkSlug}/rides/${rideSlug}/history/detail/`
`/api/parks/${parkSlug}/rides/${rideSlug}/history/detail/`
)
}

@@ -726,6 +829,300 @@ export class HistoryApi {
}
}
/**
|
||||
* Trending API service for trending content and new content
|
||||
*/
|
||||
export class TrendingApi {
|
||||
private client: ApiClient
|
||||
|
||||
constructor(client: ApiClient = new ApiClient()) {
|
||||
this.client = client
|
||||
}
|
||||
|
||||
/**
|
||||
* Get trending content (rides, parks, reviews)
|
||||
*/
|
||||
async getTrendingContent(): Promise<TrendingResponse> {
|
||||
return this.client.get<TrendingResponse>('/api/trending/content/')
|
||||
}
|
||||
|
||||
/**
|
||||
* Get new content (recently added, newly opened, upcoming)
|
||||
*/
|
||||
async getNewContent(): Promise<NewContentResponse> {
|
||||
return this.client.get<NewContentResponse>('/api/trending/new/')
|
||||
}
|
||||
|
||||
/**
|
||||
* Get trending rides specifically
|
||||
*/
|
||||
async getTrendingRides(): Promise<TrendingItem[]> {
|
||||
const response = await this.getTrendingContent()
|
||||
return response.trending_rides
|
||||
}
|
||||
|
||||
/**
|
||||
* Get trending parks specifically
|
||||
*/
|
||||
async getTrendingParks(): Promise<TrendingItem[]> {
|
||||
const response = await this.getTrendingContent()
|
||||
return response.trending_parks
|
||||
}
|
||||
|
||||
/**
|
||||
* Get latest reviews specifically
|
||||
*/
|
||||
async getLatestReviews(): Promise<TrendingItem[]> {
|
||||
const response = await this.getTrendingContent()
|
||||
return response.latest_reviews
|
||||
}
|
||||
|
||||
/**
|
||||
* Get recently added content specifically
|
||||
*/
|
||||
async getRecentlyAdded(): Promise<NewContentItem[]> {
|
||||
const response = await this.getNewContent()
|
||||
return response.recently_added
|
||||
}
|
||||
|
||||
/**
|
||||
* Get newly opened content specifically
|
||||
*/
|
||||
async getNewlyOpened(): Promise<NewContentItem[]> {
|
||||
const response = await this.getNewContent()
|
||||
return response.newly_opened
|
||||
}
|
||||
|
||||
/**
|
||||
* Get upcoming content specifically
|
||||
*/
|
||||
async getUpcoming(): Promise<NewContentItem[]> {
|
||||
const response = await this.getNewContent()
|
||||
return response.upcoming
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Ride Rankings API service
|
||||
*/
|
||||
export class RankingsApi {
|
||||
private client: ApiClient
|
||||
|
||||
constructor(client: ApiClient = new ApiClient()) {
|
||||
this.client = client
|
||||
}
|
||||
|
||||
/**
|
||||
* Get paginated list of ride rankings
|
||||
*/
|
||||
async getRankings(params?: {
|
||||
page?: number
|
||||
page_size?: number
|
||||
category?: 'RC' | 'DR' | 'FR' | 'WR' | 'TR' | 'OT'
|
||||
min_riders?: number
|
||||
park?: string
|
||||
ordering?: 'rank' | '-rank' | 'winning_percentage' | '-winning_percentage'
|
||||
}): Promise<ApiResponse<RideRanking>> {
|
||||
const queryParams: Record<string, string> = {}
|
||||
|
||||
if (params?.page) queryParams.page = params.page.toString()
|
||||
if (params?.page_size) queryParams.page_size = params.page_size.toString()
|
||||
if (params?.category) queryParams.category = params.category
|
||||
if (params?.min_riders) queryParams.min_riders = params.min_riders.toString()
|
||||
if (params?.park) queryParams.park = params.park
|
||||
if (params?.ordering) queryParams.ordering = params.ordering
|
||||
|
||||
return this.client.get<ApiResponse<RideRanking>>('/api/rankings/', queryParams)
|
||||
}
|
||||
|
||||
/**
|
||||
* Get detailed ranking information for a specific ride
|
||||
*/
|
||||
async getRankingDetail(rideSlug: string): Promise<RideRankingDetail> {
|
||||
return this.client.get<RideRankingDetail>(`/api/rankings/${rideSlug}/`)
|
||||
}
|
||||
|
||||
/**
|
||||
* Get historical ranking data for a specific ride (last 90 days)
|
||||
*/
|
||||
async getRankingHistory(rideSlug: string): Promise<RankingSnapshot[]> {
|
||||
return this.client.get<RankingSnapshot[]>(`/api/rankings/${rideSlug}/history/`)
|
||||
}
|
||||
|
||||
/**
|
||||
* Get head-to-head comparison data for a ride
|
   */
  async getHeadToHeadComparisons(rideSlug: string): Promise<HeadToHeadComparison[]> {
    return this.client.get<HeadToHeadComparison[]>(`/api/rankings/${rideSlug}/comparisons/`)
  }

  /**
   * Get system-wide ranking statistics
   */
  async getRankingStatistics(): Promise<RankingStatistics> {
    return this.client.get<RankingStatistics>('/api/rankings/statistics/')
  }

  /**
   * Trigger ranking calculation (Admin only)
   */
  async calculateRankings(category?: string): Promise<{
    status: string
    rides_ranked: number
    comparisons_made: number
    duration: number
    timestamp: string
  }> {
    const data = category ? { category } : {}
    return this.client.post<{
      status: string
      rides_ranked: number
      comparisons_made: number
      duration: number
      timestamp: string
    }>('/api/rankings/calculate/', data)
  }

  /**
   * Get top N rankings (convenience method)
   */
  async getTopRankings(limit: number = 10, category?: string): Promise<RideRanking[]> {
    const response = await this.getRankings({
      page_size: limit,
      category: category as any,
      ordering: 'rank'
    })
    return response.results
  }

  /**
   * Get rankings for a specific park
   */
  async getParkRankings(parkSlug: string, params?: {
    page?: number
    page_size?: number
    category?: string
  }): Promise<ApiResponse<RideRanking>> {
    return this.getRankings({
      ...params,
      park: parkSlug,
      category: params?.category as any
    })
  }

  /**
   * Search rankings by ride name
   */
  async searchRankings(query: string): Promise<RideRanking[]> {
    // This would ideally have a dedicated search endpoint, but for now
    // we'll fetch all and filter client-side (not ideal for large datasets)
    const response = await this.getRankings({ page_size: 100 })
    return response.results.filter(ranking =>
      ranking.ride.name.toLowerCase().includes(query.toLowerCase())
    )
  }

  /**
   * Get rank change information for a ride
   */
  async getRankChange(rideSlug: string): Promise<{
    current_rank: number
    previous_rank: number | null
    change: number
    direction: 'up' | 'down' | 'same'
  }> {
    const detail = await this.getRankingDetail(rideSlug)
    const change = detail.rank_change || 0
    return {
      current_rank: detail.rank,
      previous_rank: detail.previous_rank,
      change: Math.abs(change),
      direction: change > 0 ? 'down' : change < 0 ? 'up' : 'same'
    }
  }
}
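The direction convention in `getRankChange` is easy to invert by accident: `rank_change` is current rank minus previous rank, so a *positive* value means the ride moved down the list (a larger rank number is worse). A minimal standalone sketch of that mapping — the helper name `rankDirection` is ours for illustration, not part of the API:

```typescript
type Direction = 'up' | 'down' | 'same'

// Mirrors the convention used in getRankChange above:
// positive change => moved down, negative => moved up.
function rankDirection(change: number): { change: number; direction: Direction } {
  return {
    change: Math.abs(change),
    direction: change > 0 ? 'down' : change < 0 ? 'up' : 'same',
  }
}
```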

/**
 * Entity Search API service for fuzzy matching and suggestions
 */
export class EntitySearchApi {
  private client: ApiClient

  constructor(client: ApiClient = new ApiClient()) {
    this.client = client
  }

  /**
   * Perform fuzzy search for entities
   */
  async fuzzySearch(query: string, params?: {
    entity_types?: string[]
    context_park?: string
    limit?: number
    min_confidence?: number
  }): Promise<FuzzyMatchResult> {
    const queryParams: Record<string, string> = { q: query }

    if (params?.entity_types) queryParams.entity_types = params.entity_types.join(',')
    if (params?.context_park) queryParams.context_park = params.context_park
    if (params?.limit) queryParams.limit = params.limit.toString()
    if (params?.min_confidence) queryParams.min_confidence = params.min_confidence.toString()

    return this.client.get<FuzzyMatchResult>('/api/entities/fuzzy-search/', queryParams)
  }

  /**
   * Handle entity not found scenarios with suggestions
   */
  async handleNotFound(entityName: string, entityType?: string, context?: {
    park_slug?: string
    path?: string
  }): Promise<EntityNotFoundResponse> {
    const data: Record<string, any> = { entity_name: entityName }

    if (entityType) data.entity_type = entityType
    if (context?.park_slug) data.park_slug = context.park_slug
    if (context?.path) data.path = context.path

    return this.client.post<EntityNotFoundResponse>('/api/entities/not-found/', data)
  }

  /**
   * Get quick suggestions for autocomplete
   */
  async getQuickSuggestions(query: string, params?: {
    entity_types?: string[]
    limit?: number
    park_context?: string
  }): Promise<QuickSuggestionResponse> {
    const queryParams: Record<string, string> = { q: query }

    if (params?.entity_types) queryParams.entity_types = params.entity_types.join(',')
    if (params?.limit) queryParams.limit = params.limit.toString()
    if (params?.park_context) queryParams.park_context = params.park_context

    return this.client.get<QuickSuggestionResponse>('/api/entities/quick-suggestions/', queryParams)
  }

  /**
   * Check if user is authenticated and can add entities
   */
  isAuthenticated(): boolean {
    return !!this.client.getAuthToken()
  }

  /**
   * Get authentication URLs for redirects
   */
  getAuthUrls(): { login: string; signup: string } {
    const baseUrl = this.client['baseUrl']
    return {
      login: `${baseUrl}/auth/login/`,
      signup: `${baseUrl}/auth/signup/`
    }
  }
}
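The `EntitySearchApi` methods above all serialize optional parameters the same way: only the options actually supplied end up in the query string, arrays are joined with commas, and numbers become strings. A standalone sketch of that serialization using the same option names as `getQuickSuggestions` (the helper function itself is hypothetical, not part of the service):

```typescript
// Builds the query-string map for /api/entities/quick-suggestions/,
// mirroring the conditional-assignment pattern used in EntitySearchApi.
function buildSuggestionParams(
  query: string,
  opts?: { entity_types?: string[]; limit?: number; park_context?: string }
): Record<string, string> {
  const params: Record<string, string> = { q: query }
  if (opts?.entity_types) params.entity_types = opts.entity_types.join(',')
  if (opts?.limit) params.limit = opts.limit.toString()
  if (opts?.park_context) params.park_context = opts.park_context
  return params
}
```

Omitted options never appear as empty keys, which keeps backend-side parsing simple.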

/**
 * Main API service that combines all endpoints
 */
@@ -734,6 +1131,9 @@ export class ThrillWikiApi {
  public rides: RidesApi
  public auth: AuthApi
  public history: HistoryApi
  public trending: TrendingApi
  public rankings: RankingsApi
  public entitySearch: EntitySearchApi
  private client: ApiClient

  constructor() {
@@ -742,6 +1142,9 @@ export class ThrillWikiApi {
    this.rides = new RidesApi(this.client)
    this.auth = new AuthApi(this.client)
    this.history = new HistoryApi(this.client)
    this.trending = new TrendingApi(this.client)
    this.rankings = new RankingsApi(this.client)
    this.entitySearch = new EntitySearchApi(this.client)
  }

  /**
@@ -785,6 +1188,9 @@ export const parksApi = api.parks
export const ridesApi = api.rides
export const authApi = api.auth
export const historyApi = api.history
export const trendingApi = api.trending
export const rankingsApi = api.rankings
export const entitySearchApi = api.entitySearch

// Export types for use in components
export type {
@@ -796,5 +1202,9 @@ export type {
  ParkHistoryResponse,
  RideHistoryResponse,
  UnifiedHistoryTimeline,
  HistoryParams
  HistoryParams,
  EntitySuggestion,
  FuzzyMatchResult,
  EntityNotFoundResponse,
  QuickSuggestionResponse
}

@@ -179,3 +179,163 @@ export interface SocialAuthProvider {
  iconUrl?: string
  authUrl: string
}

// Trending and New Content types
export interface TrendingItem {
  id: number
  name: string
  location: string
  category: string
  rating: number
  rank: number
  views: number
  views_change: number
  slug: string
}

export interface NewContentItem {
  id: number
  name: string
  location: string
  category: string
  date_added: string
  slug: string
}

export interface TrendingResponse {
  trending_rides: TrendingItem[]
  trending_parks: TrendingItem[]
  latest_reviews: TrendingItem[]
}

export interface NewContentResponse {
  recently_added: NewContentItem[]
  newly_opened: NewContentItem[]
  upcoming: NewContentItem[]
}

// Enhanced Park and Ride interfaces for trending support
export interface ParkWithTrending extends Park {
  rating?: number
  rank?: number
  views?: number
  views_change?: number
  date_added?: string
}

export interface RideWithTrending extends Ride {
  rating?: number
  rank?: number
  views?: number
  views_change?: number
  date_added?: string
}

// Ride Ranking types
export interface RideRanking {
  id: number
  rank: number
  ride: {
    id: number
    name: string
    slug: string
    park: {
      id: number
      name: string
      slug: string
    }
    category: 'RC' | 'DR' | 'FR' | 'WR' | 'TR' | 'OT'
  }
  wins: number
  losses: number
  ties: number
  winning_percentage: number
  mutual_riders_count: number
  comparison_count: number
  average_rating: number
  last_calculated: string
  rank_change?: number
  previous_rank?: number | null
}

export interface RideRankingDetail extends RideRanking {
  ride: {
    id: number
    name: string
    slug: string
    description?: string
    park: {
      id: number
      name: string
      slug: string
      location?: {
        city?: string
        state?: string
        country?: string
      }
    }
    category: 'RC' | 'DR' | 'FR' | 'WR' | 'TR' | 'OT'
    manufacturer?: {
      id: number
      name: string
    }
    opening_date?: string
    status: string
  }
  calculation_version?: string
  head_to_head_comparisons?: HeadToHeadComparison[]
  ranking_history?: RankingSnapshot[]
}

export interface HeadToHeadComparison {
  opponent: {
    id: number
    name: string
    slug: string
    park: string
  }
  wins: number
  losses: number
  ties: number
  result: 'win' | 'loss' | 'tie'
  mutual_riders: number
}

export interface RankingSnapshot {
  date: string
  rank: number
  winning_percentage: number
}

export interface RankingStatistics {
  total_ranked_rides: number
  total_comparisons: number
  last_calculation_time: string
  calculation_duration: number
  top_rated_ride?: {
    id: number
    name: string
    slug: string
    park: string
    rank: number
    winning_percentage: number
    average_rating: number
  }
  most_compared_ride?: {
    id: number
    name: string
    slug: string
    park: string
    comparison_count: number
  }
  biggest_rank_change?: {
    ride: {
      id: number
      name: string
      slug: string
    }
    current_rank: number
    previous_rank: number
    change: number
  }
}

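The ranking interfaces carry both raw tallies (`wins`, `losses`, `ties`) and a backend-precomputed `winning_percentage`. If a client ever needed to recompute it, one plausible formula counts a tie as half a win — this is an assumption for illustration, not the server's documented formula:

```typescript
// Hypothetical client-side fallback for winning_percentage,
// treating a tie as half a win. Returns a value in [0, 100].
function winningPercentage(wins: number, losses: number, ties: number): number {
  const total = wins + losses + ties
  if (total === 0) return 0 // no comparisons yet
  return ((wins + ties / 2) / total) * 100
}
```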
@@ -8,13 +8,16 @@
<div class="text-center max-w-4xl mx-auto">
<h1 class="text-4xl md:text-6xl font-bold mb-6">Discover Your Next Thrill</h1>
<p class="text-xl md:text-2xl mb-8 text-purple-100">
Search through thousands of amusement rides and parks in an expansive community database
Search through thousands of amusement rides and parks in an expansive
community database
</p>

<!-- Search Bar -->
<div class="max-w-2xl mx-auto mb-16">
<div class="relative">
<div class="absolute inset-y-0 left-0 pl-4 flex items-center pointer-events-none">
<div
class="absolute inset-y-0 left-0 pl-4 flex items-center pointer-events-none"
>
<svg
class="h-6 w-6 text-gray-400"
fill="none"
@@ -77,8 +80,14 @@
<div class="container mx-auto px-4 sm:px-6 lg:px-8">
<div class="text-center mb-12">
<div class="flex items-center justify-center mb-4">
<svg class="h-6 w-6 text-orange-500 mr-2" fill="currentColor" viewBox="0 0 24 24">
<path d="M12 2L13.09 8.26L22 9L13.09 9.74L12 16L10.91 9.74L2 9L10.91 8.26L12 2Z" />
<svg
class="h-6 w-6 text-orange-500 mr-2"
fill="currentColor"
viewBox="0 0 24 24"
>
<path
d="M12 2L13.09 8.26L22 9L13.09 9.74L12 16L10.91 9.74L2 9L10.91 8.26L12 2Z"
/>
</svg>
<h2 class="text-3xl md:text-4xl font-bold text-gray-900 dark:text-white">
What's Trending
@@ -109,19 +118,32 @@
</div>

<!-- Trending Content -->
<div class="grid grid-cols-1 md:grid-cols-3 gap-6 mb-8">
<!-- Loading State -->
<div v-if="isLoadingTrending" class="grid grid-cols-1 md:grid-cols-3 gap-6 mb-8">
<div
v-for="item in getTrendingContent()"
:key="item.id"
class="bg-white dark:bg-gray-700 rounded-lg shadow-lg overflow-hidden hover:shadow-xl transition-shadow cursor-pointer group"
@click="viewTrendingItem(item)"
v-for="i in 3"
:key="'trending-skeleton-' + i"
class="bg-white dark:bg-gray-700 rounded-lg shadow-lg overflow-hidden animate-pulse"
>
<!-- Image placeholder -->
<div class="h-48 bg-gray-200 dark:bg-gray-600"></div>
<div class="p-6">
<div class="h-6 bg-gray-200 dark:bg-gray-600 rounded mb-2"></div>
<div class="h-4 bg-gray-200 dark:bg-gray-600 rounded mb-3 w-2/3"></div>
<div class="flex justify-between">
<div class="h-4 bg-gray-200 dark:bg-gray-600 rounded w-1/4"></div>
<div class="h-4 bg-gray-200 dark:bg-gray-600 rounded w-1/4"></div>
</div>
</div>
</div>
</div>

<!-- Error State -->
<div v-else-if="trendingError" class="mb-8">
<div
class="h-48 bg-gradient-to-br from-gray-200 to-gray-300 dark:from-gray-600 dark:to-gray-700 flex items-center justify-center relative"
class="bg-red-50 dark:bg-red-900/20 border border-red-200 dark:border-red-800 rounded-lg p-6 text-center"
>
<svg
class="h-12 w-12 text-gray-400 dark:text-gray-500"
class="h-12 w-12 text-red-400 mx-auto mb-4"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
@@ -129,22 +151,66 @@
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="1"
stroke-width="2"
d="M12 9v2m0 4h.01m-6.938 4h13.856c1.54 0 2.502-1.667 1.732-2.5L13.732 4c-.77-.833-1.732-.833-2.5 0L4.314 15.5c-.77.833.192 2.5 1.732 2.5z"
/>
</svg>
<h3 class="text-lg font-medium text-red-800 dark:text-red-200 mb-2">
Failed to Load Trending Content
</h3>
<p class="text-red-600 dark:text-red-400 mb-4">{{ trendingError }}</p>
<button
@click="fetchTrendingContent()"
class="px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-lg font-medium transition-colors"
>
Try Again
</button>
</div>
</div>

<!-- Content State -->
<div v-else class="grid grid-cols-1 md:grid-cols-3 gap-6 mb-8">
<article
v-for="item in getTrendingContent()"
:key="item.id"
class="bg-white dark:bg-gray-800 rounded-xl shadow-sm hover:shadow-lg border border-gray-200 dark:border-gray-700 overflow-hidden transition-all duration-200 cursor-pointer group hover:scale-[1.01] hover:-translate-y-0.5"
@click="viewTrendingItem(item)"
>
<!-- Image placeholder -->
<div
class="h-48 bg-gradient-to-br from-gray-100 to-gray-200 dark:from-gray-700 dark:to-gray-800 flex items-center justify-center relative"
>
<svg
class="h-10 w-10 text-gray-400 dark:text-gray-500"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="1.5"
d="M4 16l4.586-4.586a2 2 0 012.828 0L16 16m-2-2l1.586-1.586a2 2 0 012.828 0L20 14m-6-6h.01M6 20h12a2 2 0 002-2V6a2 2 0 00-2-2H6a2 2 0 00-2 2v12a2 2 0 002 2z"
/>
</svg>
<!-- Trending badge -->
<div class="absolute top-3 left-3">
<span class="bg-orange-500 text-white text-xs font-medium px-2 py-1 rounded-full">
<div class="absolute top-4 left-4">
<span
class="inline-flex items-center px-2.5 py-1 rounded-full text-xs font-semibold bg-orange-500 text-white shadow-sm"
>
#{{ item.rank }}
</span>
</div>
<!-- Rating badge -->
<div class="absolute top-3 right-3">
<div class="absolute top-4 right-4">
<span
class="bg-gray-900 bg-opacity-75 text-white text-xs font-medium px-2 py-1 rounded-full flex items-center"
class="inline-flex items-center px-2 py-1 rounded-full text-xs font-medium bg-black/75 text-white backdrop-blur-sm"
>
<svg
class="h-3 w-3 text-yellow-400 mr-1 flex-shrink-0"
fill="currentColor"
viewBox="0 0 20 20"
>
<svg class="h-3 w-3 text-yellow-400 mr-1" fill="currentColor" viewBox="0 0 20 20">
<path
d="M9.049 2.927c.3-.921 1.603-.921 1.902 0l1.07 3.292a1 1 0 00.95.69h3.462c.969 0 1.371 1.24.588 1.81l-2.8 2.034a1 1 0 00-.364 1.118l1.07 3.292c.3.921-.755 1.688-1.54 1.118l-2.8-2.034a1 1 0 00-1.175 0l-2.8 2.034c-.784.57-1.838-.197-1.539-1.118l1.07-3.292a1 1 0 00-.364-1.118L2.98 8.72c-.783-.57-.38-1.81.588-1.81h3.461a1 1 0 00.951-.69l1.07-3.292z"
/>
@@ -156,19 +222,28 @@

<div class="p-6">
<h3
class="text-lg font-semibold text-gray-900 dark:text-white mb-2 group-hover:text-blue-600 dark:group-hover:text-blue-400 transition-colors"
class="text-xl font-semibold text-gray-900 dark:text-white mb-3 group-hover:text-blue-600 dark:group-hover:text-blue-400 transition-colors leading-tight"
>
{{ item.name }}
</h3>
<p class="text-sm text-gray-600 dark:text-gray-400 mb-3">
<span class="font-medium">{{ item.location }}</span>
<span v-if="item.category"> • {{ item.category }}</span>
<p class="text-sm text-gray-600 dark:text-gray-400 mb-4 leading-relaxed">
<span class="font-medium text-gray-700 dark:text-gray-300">{{
item.location
}}</span>
<span v-if="item.category" class="text-gray-500 dark:text-gray-500">
• {{ item.category }}</span
>
</p>
<div class="flex items-center justify-between">
<span
class="text-green-600 dark:text-green-400 text-sm font-medium flex items-center"
class="inline-flex items-center text-sm font-medium text-green-600 dark:text-green-400"
>
<svg
class="h-4 w-4 mr-1.5 flex-shrink-0"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<svg class="h-4 w-4 mr-1" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path
stroke-linecap="round"
stroke-linejoin="round"
@@ -179,13 +254,13 @@
+{{ item.views_change }}%
</span>
<span
class="text-blue-600 dark:text-blue-400 text-sm font-medium group-hover:underline"
class="text-sm font-medium text-gray-500 dark:text-gray-400 group-hover:text-blue-600 dark:group-hover:text-blue-400 transition-colors"
>
{{ item.views }} views
{{ item.views.toLocaleString() }} views
</span>
</div>
</div>
</div>
</article>
</div>
</div>
</section>
@@ -195,10 +270,18 @@
<div class="container mx-auto px-4 sm:px-6 lg:px-8">
<div class="text-center mb-12">
<div class="flex items-center justify-center mb-4">
<svg class="h-6 w-6 text-blue-500 mr-2" fill="currentColor" viewBox="0 0 24 24">
<path d="M12 2L15.09 8.26L22 9L15.09 9.74L12 16L8.91 9.74L2 9L8.91 8.26L12 2Z" />
<svg
class="h-6 w-6 text-blue-500 mr-2"
fill="currentColor"
viewBox="0 0 24 24"
>
<path
d="M12 2L15.09 8.26L22 9L15.09 9.74L12 16L8.91 9.74L2 9L8.91 8.26L12 2Z"
/>
</svg>
<h2 class="text-3xl md:text-4xl font-bold text-gray-900 dark:text-white">What's New</h2>
<h2 class="text-3xl md:text-4xl font-bold text-gray-900 dark:text-white">
What's New
</h2>
</div>
<p class="text-xl text-gray-600 dark:text-gray-400 max-w-2xl mx-auto">
Stay up to date with the latest additions and upcoming attractions
@@ -225,19 +308,32 @@
</div>

<!-- New Content -->
<div class="grid grid-cols-1 md:grid-cols-2 gap-8 mb-8">
<!-- Loading State -->
<div v-if="isLoadingNew" class="grid grid-cols-1 md:grid-cols-2 gap-8 mb-8">
<div
v-for="item in getNewContent()"
:key="item.id"
class="bg-white dark:bg-gray-800 rounded-lg shadow-lg overflow-hidden hover:shadow-xl transition-shadow cursor-pointer group"
@click="viewNewItem(item)"
v-for="i in 2"
:key="'new-skeleton-' + i"
class="bg-white dark:bg-gray-800 rounded-lg shadow-lg overflow-hidden animate-pulse"
>
<!-- Image placeholder -->
<div class="h-48 bg-gray-200 dark:bg-gray-600"></div>
<div class="p-6">
<div class="h-6 bg-gray-200 dark:bg-gray-600 rounded mb-2"></div>
<div class="h-4 bg-gray-200 dark:bg-gray-600 rounded mb-3 w-2/3"></div>
<div class="flex justify-between">
<div class="h-4 bg-gray-200 dark:bg-gray-600 rounded w-1/3"></div>
<div class="h-4 bg-gray-200 dark:bg-gray-600 rounded w-1/4"></div>
</div>
</div>
</div>
</div>

<!-- Error State -->
<div v-else-if="newError" class="mb-8">
<div
class="h-48 bg-gradient-to-br from-gray-200 to-gray-300 dark:from-gray-600 dark:to-gray-700 flex items-center justify-center relative"
class="bg-red-50 dark:bg-red-900/20 border border-red-200 dark:border-red-800 rounded-lg p-6 text-center"
>
<svg
class="h-12 w-12 text-gray-400 dark:text-gray-500"
class="h-12 w-12 text-red-400 mx-auto mb-4"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
@@ -245,13 +341,53 @@
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="1"
stroke-width="2"
d="M12 9v2m0 4h.01m-6.938 4h13.856c1.54 0 2.502-1.667 1.732-2.5L13.732 4c-.77-.833-1.732-.833-2.5 0L4.314 15.5c-.77.833.192 2.5 1.732 2.5z"
/>
</svg>
<h3 class="text-lg font-medium text-red-800 dark:text-red-200 mb-2">
Failed to Load New Content
</h3>
<p class="text-red-600 dark:text-red-400 mb-4">{{ newError }}</p>
<button
@click="fetchNewContent()"
class="px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-lg font-medium transition-colors"
>
Try Again
</button>
</div>
</div>

<!-- Content State -->
<div v-else class="grid grid-cols-1 md:grid-cols-2 gap-8 mb-8">
<article
v-for="item in getNewContent()"
:key="item.id"
class="bg-white dark:bg-gray-800 rounded-xl shadow-sm hover:shadow-lg border border-gray-200 dark:border-gray-700 overflow-hidden transition-all duration-200 cursor-pointer group hover:scale-[1.01] hover:-translate-y-0.5"
@click="viewNewItem(item)"
>
<!-- Image placeholder -->
<div
class="h-48 bg-gradient-to-br from-gray-100 to-gray-200 dark:from-gray-700 dark:to-gray-800 flex items-center justify-center relative"
>
<svg
class="h-10 w-10 text-gray-400 dark:text-gray-500"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="1.5"
d="M4 16l4.586-4.586a2 2 0 012.828 0L16 16m-2-2l1.586-1.586a2 2 0 012.828 0L20 14m-6-6h.01M6 20h12a2 2 0 002-2V6a2 2 0 00-2-2H6a2 2 0 00-2 2v12a2 2 0 002 2z"
/>
</svg>
<!-- New badge -->
<div class="absolute top-3 left-3">
<span class="bg-green-500 text-white text-xs font-medium px-2 py-1 rounded-full">
<div class="absolute top-4 left-4">
<span
class="inline-flex items-center px-2.5 py-1 rounded-full text-xs font-semibold bg-green-500 text-white shadow-sm"
>
New
</span>
</div>
@@ -259,26 +395,30 @@

<div class="p-6">
<h3
class="text-lg font-semibold text-gray-900 dark:text-white mb-2 group-hover:text-blue-600 dark:group-hover:text-blue-400 transition-colors"
class="text-xl font-semibold text-gray-900 dark:text-white mb-3 group-hover:text-blue-600 dark:group-hover:text-blue-400 transition-colors leading-tight"
>
{{ item.name }}
</h3>
<p class="text-sm text-gray-600 dark:text-gray-400 mb-3">
<span class="font-medium">{{ item.location }}</span>
<span v-if="item.category"> • {{ item.category }}</span>
<p class="text-sm text-gray-600 dark:text-gray-400 mb-4 leading-relaxed">
<span class="font-medium text-gray-700 dark:text-gray-300">{{
item.location
}}</span>
<span v-if="item.category" class="text-gray-500 dark:text-gray-500">
• {{ item.category }}</span
>
</p>
<div class="flex items-center justify-between">
<span class="text-gray-600 dark:text-gray-400 text-sm">
<span class="text-sm text-gray-600 dark:text-gray-400 font-medium">
Added {{ item.date_added }}
</span>
<span
class="text-blue-600 dark:text-blue-400 text-sm font-medium group-hover:underline"
class="text-sm font-medium text-blue-600 dark:text-blue-400 group-hover:text-blue-700 dark:group-hover:text-blue-300 transition-colors"
>
View All New Additions →
Learn More →
</span>
</div>
</div>
</div>
</article>
</div>
</div>
</section>
@@ -290,8 +430,8 @@
Join the ThrillWiki Community
</h2>
<p class="text-xl text-gray-600 dark:text-gray-400 max-w-3xl mx-auto mb-8">
Share your experiences, contribute to our database, and connect with fellow theme park
enthusiasts.
Share your experiences, contribute to our database, and connect with fellow
theme park enthusiasts.
</p>
<button
class="px-8 py-4 bg-gray-900 hover:bg-gray-800 dark:bg-gray-700 dark:hover:bg-gray-600 text-white font-medium rounded-lg transition-colors"
@@ -304,13 +444,21 @@
</template>

<script setup lang="ts">
import { ref, onMounted } from 'vue'
import { useRouter } from 'vue-router'
import { ref, onMounted } from "vue";
import { useRouter } from "vue-router";
import { trendingApi } from "@/services/api";
import type { TrendingItem, NewContentItem } from "@/types";

const router = useRouter()
const heroSearchQuery = ref('')
const activeTrendingTab = ref('rides')
const activeNewTab = ref('recently-added')
const router = useRouter();
const heroSearchQuery = ref("");
const activeTrendingTab = ref("rides");
const activeNewTab = ref("recently-added");

// Loading states
const isLoadingTrending = ref(true);
const isLoadingNew = ref(true);
const trendingError = ref<string | null>(null);
const newError = ref<string | null>(null);

// Sample data matching the design
const stats = ref({
@@ -318,236 +466,282 @@ const stats = ref({
  rides: 10,
  reviews: 20,
  photos: 1,
})
});

const trendingTabs = [
  { id: 'rides', label: 'Trending Rides' },
  { id: 'parks', label: 'Trending Parks' },
  { id: 'reviews', label: 'Latest Reviews' },
]
  { id: "rides", label: "Trending Rides" },
  { id: "parks", label: "Trending Parks" },
  { id: "reviews", label: "Latest Reviews" },
];

const newTabs = [
  { id: 'recently-added', label: 'Recently Added' },
  { id: 'newly-opened', label: 'Newly Opened' },
  { id: 'upcoming', label: 'Upcoming' },
]
  { id: "recently-added", label: "Recently Added" },
  { id: "newly-opened", label: "Newly Opened" },
  { id: "upcoming", label: "Upcoming" },
];

const trendingRides = ref([
  {
    id: 1,
    name: 'Steel Vengeance',
    location: 'Cedar Point',
    category: 'Hybrid Coaster',
    rating: 4.9,
    rank: 1,
    views: 4820,
    views_change: 23,
    slug: 'steel-vengeance',
  },
  {
    id: 2,
    name: 'Kingda Ka',
    location: 'Six Flags Great Adventure',
    category: 'Launched Coaster',
    rating: 4.8,
    rank: 2,
    views: 3647,
    views_change: 18,
    slug: 'kingda-ka',
  },
  {
    id: 3,
    name: 'Pirates of the Caribbean',
    location: 'Disneyland',
    category: 'Dark Ride',
    rating: 4.7,
    rank: 3,
    views: 3156,
    views_change: 12,
    slug: 'pirates-of-the-caribbean',
  },
])

const trendingParks = ref([
  {
    id: 1,
    name: 'Cedar Point',
    location: 'Sandusky, Ohio',
    category: 'Amusement Park',
    rating: 4.8,
    rank: 1,
    views: 8920,
    views_change: 15,
    slug: 'cedar-point',
  },
  {
    id: 2,
    name: 'Magic Kingdom',
    location: 'Orlando, Florida',
    category: 'Theme Park',
    rating: 4.9,
    rank: 2,
    views: 7654,
    views_change: 12,
    slug: 'magic-kingdom',
  },
  {
    id: 3,
    name: 'Europa-Park',
    location: 'Rust, Germany',
    category: 'Theme Park',
    rating: 4.7,
    rank: 3,
    views: 5432,
    views_change: 22,
    slug: 'europa-park',
  },
])

const latestReviews = ref([
  {
    id: 1,
    name: 'Steel Vengeance Review',
    location: 'Cedar Point',
    category: 'Roller Coaster',
    rating: 5.0,
    rank: 1,
    views: 1234,
    views_change: 45,
    slug: 'steel-vengeance-review',
  },
  {
    id: 2,
    name: 'Kingda Ka Experience',
    location: 'Six Flags Great Adventure',
    category: 'Launch Coaster',
    rating: 4.8,
    rank: 2,
    views: 987,
    views_change: 32,
    slug: 'kingda-ka-review',
  },
  {
    id: 3,
    name: 'Pirates Ride Review',
    location: 'Disneyland',
    category: 'Dark Ride',
    rating: 4.6,
    rank: 3,
    views: 765,
    views_change: 28,
    slug: 'pirates-review',
  },
])

const recentlyAdded = ref([
  {
    id: 1,
    name: 'Guardians of the Galaxy: Cosmic Rewind',
    location: 'EPCOT',
    category: 'Indoor Coaster',
    date_added: '2024-01-20',
    slug: 'guardians-cosmic-rewind',
  },
  {
    id: 2,
    name: 'VelociCoaster',
    location: "Universal's Islands of Adventure",
    category: 'Launch Coaster',
    date_added: '2024-01-18',
    slug: 'velocicoaster',
  },
])

const newlyOpened = ref([
  {
    id: 1,
    name: 'TRON Lightcycle / Run',
    location: 'Magic Kingdom',
    category: 'Launch Coaster',
    date_added: '2023-04-04',
    slug: 'tron-lightcycle-run',
  },
  {
    id: 2,
    name: "Hagrid's Magical Creatures Motorbike Adventure",
    location: "Universal's Islands of Adventure",
    category: 'Story Coaster',
    date_added: '2019-06-13',
    slug: 'hagrids-motorbike-adventure',
  },
])

const upcoming = ref([
  {
    id: 1,
    name: 'Epic Universe',
    location: 'Universal Orlando',
    category: 'Theme Park',
    date_added: 'Opening 2025',
    slug: 'epic-universe',
  },
  {
    id: 2,
    name: 'New Fantasyland Expansion',
    location: 'Magic Kingdom',
    category: 'Land Expansion',
    date_added: 'Opening 2026',
    slug: 'fantasyland-expansion',
  },
])
// Data from API
const trendingRides = ref<TrendingItem[]>([]);
const trendingParks = ref<TrendingItem[]>([]);
const latestReviews = ref<TrendingItem[]>([]);
const recentlyAdded = ref<NewContentItem[]>([]);
const newlyOpened = ref<NewContentItem[]>([]);
const upcoming = ref<NewContentItem[]>([]);

// Methods
const handleHeroSearch = () => {
  if (heroSearchQuery.value.trim()) {
    router.push({
      name: 'search-results',
      name: "search-results",
      query: { q: heroSearchQuery.value.trim() },
    })
  }
    });
  }
};

const getTrendingContent = () => {
  switch (activeTrendingTab.value) {
    case 'rides':
      return trendingRides.value
    case 'parks':
      return trendingParks.value
    case 'reviews':
      return latestReviews.value
    case "rides":
      return trendingRides.value;
    case "parks":
      return trendingParks.value;
    case "reviews":
      return latestReviews.value;
    default:
      return trendingRides.value
  }
      return trendingRides.value;
  }
};

const getNewContent = () => {
  switch (activeNewTab.value) {
    case 'recently-added':
      return recentlyAdded.value
    case 'newly-opened':
      return newlyOpened.value
    case 'upcoming':
      return upcoming.value
    case "recently-added":
      return recentlyAdded.value;
    case "newly-opened":
      return newlyOpened.value;
    case "upcoming":
      return upcoming.value;
    default:
      return recentlyAdded.value
  }
      return recentlyAdded.value;
  }
};
|

const viewTrendingItem = (item: any) => {
  if (activeTrendingTab.value === "parks") {
    router.push({ name: "park-detail", params: { slug: item.slug } });
  } else if (activeTrendingTab.value === "rides") {
    router.push({ name: "global-ride-detail", params: { rideSlug: item.slug } });
  }
};

const viewNewItem = (item: any) => {
  router.push({ name: "park-detail", params: { slug: item.slug } });
};

// API calls
const fetchTrendingContent = async () => {
  try {
    isLoadingTrending.value = true;
    trendingError.value = null;

    const response = await trendingApi.getTrendingContent();
    trendingRides.value = response.trending_rides;
    trendingParks.value = response.trending_parks;
    latestReviews.value = response.latest_reviews;
  } catch (error) {
    console.error("Error fetching trending content:", error);
    trendingError.value = "Failed to load trending content";

    // Fallback to sample data on error
    trendingRides.value = [
      {
        id: 1,
        name: "Steel Vengeance",
        location: "Cedar Point",
        category: "Hybrid Coaster",
        rating: 4.9,
        rank: 1,
        views: 4820,
        views_change: 23,
        slug: "steel-vengeance",
      },
      {
        id: 2,
        name: "Kingda Ka",
        location: "Six Flags Great Adventure",
        category: "Launched Coaster",
        rating: 4.8,
        rank: 2,
        views: 3647,
        views_change: 18,
        slug: "kingda-ka",
      },
      {
        id: 3,
        name: "Pirates of the Caribbean",
        location: "Disneyland",
        category: "Dark Ride",
        rating: 4.7,
        rank: 3,
        views: 3156,
        views_change: 12,
        slug: "pirates-of-the-caribbean",
      },
    ];

    trendingParks.value = [
      {
        id: 1,
        name: "Cedar Point",
        location: "Sandusky, Ohio",
        category: "Amusement Park",
        rating: 4.8,
        rank: 1,
        views: 8920,
        views_change: 15,
        slug: "cedar-point",
      },
      {
        id: 2,
        name: "Magic Kingdom",
        location: "Orlando, Florida",
        category: "Theme Park",
        rating: 4.9,
        rank: 2,
        views: 7654,
        views_change: 12,
        slug: "magic-kingdom",
      },
      {
        id: 3,
        name: "Europa-Park",
        location: "Rust, Germany",
        category: "Theme Park",
        rating: 4.7,
        rank: 3,
        views: 5432,
        views_change: 22,
        slug: "europa-park",
      },
    ];

    latestReviews.value = [
      {
        id: 1,
        name: "Steel Vengeance Review",
        location: "Cedar Point",
        category: "Roller Coaster",
        rating: 5.0,
        rank: 1,
        views: 1234,
        views_change: 45,
        slug: "steel-vengeance-review",
      },
      {
        id: 2,
        name: "Kingda Ka Experience",
        location: "Six Flags Great Adventure",
        category: "Launch Coaster",
        rating: 4.8,
        rank: 2,
        views: 987,
        views_change: 32,
        slug: "kingda-ka-review",
      },
      {
        id: 3,
        name: "Pirates Ride Review",
        location: "Disneyland",
        category: "Dark Ride",
        rating: 4.6,
        rank: 3,
        views: 765,
        views_change: 28,
        slug: "pirates-review",
      },
    ];
  } finally {
    isLoadingTrending.value = false;
  }
};

const fetchNewContent = async () => {
  try {
    isLoadingNew.value = true;
    newError.value = null;

    const response = await trendingApi.getNewContent();
    recentlyAdded.value = response.recently_added;
    newlyOpened.value = response.newly_opened;
    upcoming.value = response.upcoming;
  } catch (error) {
    console.error("Error fetching new content:", error);
    newError.value = "Failed to load new content";

    // Fallback to sample data on error
    recentlyAdded.value = [
      {
        id: 1,
        name: "Guardians of the Galaxy: Cosmic Rewind",
        location: "EPCOT",
        category: "Indoor Coaster",
        date_added: "2024-01-20",
        slug: "guardians-cosmic-rewind",
      },
      {
        id: 2,
        name: "VelociCoaster",
        location: "Universal's Islands of Adventure",
        category: "Launch Coaster",
        date_added: "2024-01-18",
        slug: "velocicoaster",
      },
    ];

    newlyOpened.value = [
      {
        id: 1,
        name: "TRON Lightcycle / Run",
        location: "Magic Kingdom",
        category: "Launch Coaster",
        date_added: "2023-04-04",
        slug: "tron-lightcycle-run",
      },
      {
        id: 2,
        name: "Hagrid's Magical Creatures Motorbike Adventure",
        location: "Universal's Islands of Adventure",
        category: "Story Coaster",
        date_added: "2019-06-13",
        slug: "hagrids-motorbike-adventure",
      },
    ];

    upcoming.value = [
      {
        id: 1,
        name: "Epic Universe",
        location: "Universal Orlando",
        category: "Theme Park",
        date_added: "Opening 2025",
        slug: "epic-universe",
      },
      {
        id: 2,
        name: "New Fantasyland Expansion",
        location: "Magic Kingdom",
        category: "Land Expansion",
        date_added: "Opening 2026",
        slug: "fantasyland-expansion",
      },
    ];
  } finally {
    isLoadingNew.value = false;
  }
};

onMounted(async () => {
  console.log("Home view mounted - fetching trending data from API");

  // Fetch both trending and new content in parallel
  await Promise.all([fetchTrendingContent(), fetchNewContent()]);
});
</script>

@@ -37,6 +37,10 @@ flake8==7.1.1
pytest==8.3.4
pytest-django==4.10.0

# Caching and Redis
redis==5.2.1
django-redis==5.4.0

# WebSocket Support
channels==4.2.0
channels-redis==4.2.1
 202  shared/docs/development/start-servers-signal-handling-fix.md  (new file)
@@ -0,0 +1,202 @@
# Signal Handling Fix for start-servers.sh

## Problem Description

The [`start-servers.sh`](../../scripts/start-servers.sh) script was not properly responding to Ctrl+C (SIGINT) signals, causing the script to continue running even after the user attempted to stop it. This left background server processes running and made it difficult to gracefully shut down the development environment.

## Root Causes Identified

1. **Late Signal Trap Registration**: Signal traps were only registered in the `wait_for_servers()` function after all servers had started, leaving a window during startup where Ctrl+C wouldn't work.

2. **No Signal Handling During Startup**: The entire initialization and server startup process had no signal traps, making the script unresponsive to interruption during these phases.

3. **Background Process Signal Issues**: Background processes weren't properly configured to receive termination signals from the parent script.

4. **No Recursive Signal Prevention**: Multiple signal calls could interfere with graceful shutdown.

5. **Inefficient Signal Detection**: The monitoring loop slept for 2 seconds between checks, delaying the script's reaction to shutdown signals.

## Changes Made

### 1. Early Signal Trap Registration

**Location**: `main()` function, line ~463

**Before**:

```bash
main() {
    print_status "ThrillWiki Server Start Script Starting..."
    print_status "This script works whether servers are currently running or not."
    print_status "Project root: $PROJECT_ROOT"

    # Validate project structure
    validate_project
```

**After**:

```bash
main() {
    print_status "ThrillWiki Server Start Script Starting..."
    print_status "This script works whether servers are currently running or not."
    print_status "Project root: $PROJECT_ROOT"

    # Set up signal traps EARLY - before any long-running operations
    print_status "Setting up signal handlers for graceful shutdown..."
    trap 'graceful_shutdown' INT TERM

    # Validate project structure
    validate_project
```

### 2. Prevent Recursive Signal Handling

**Location**: `graceful_shutdown()` function, line ~53

**Before**:

```bash
graceful_shutdown() {
    if [ "$CLEANUP_PERFORMED" = true ]; then
        return 0
    fi

    CLEANUP_PERFORMED=true

    print_warning "Received shutdown signal - performing graceful shutdown..."
```

**After**:

```bash
graceful_shutdown() {
    if [ "$CLEANUP_PERFORMED" = true ]; then
        return 0
    fi

    CLEANUP_PERFORMED=true

    print_warning "Received shutdown signal - performing graceful shutdown..."

    # Disable further signal handling to prevent recursive calls
    trap - INT TERM
```

### 3. Improved Background Process Signal Handling

**Location**: `start_backend()` and `start_frontend()` functions

**Backend Changes** (line ~227):

```bash
uv run python manage.py runserver_plus 8000 --verbosity=2 &
BACKEND_PID=$!

# Make sure the background process can receive signals
disown -h "$BACKEND_PID" 2>/dev/null || true
```

**Frontend Changes** (line ~260):

```bash
pnpm vite --port 5173 --open --host localhost --debug &
FRONTEND_PID=$!

# Make sure the background process can receive signals
disown -h "$FRONTEND_PID" 2>/dev/null || true
```
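The `disown -h` behaviour can be observed in isolation. A minimal sketch, not taken from the script itself — the `sleep` stands in for a server process and `pid` is a local name:

```bash
# Demonstrates the disown -h semantics relied on above: the background job
# is marked to skip SIGHUP on shell exit, yet it remains a child process
# that the parent can still probe and signal directly.
sleep 30 &
pid=$!
disown -h "$pid" 2>/dev/null || true

# The process is still reachable with plain signals after disown -h.
if kill -0 "$pid" 2>/dev/null; then
    echo "still signallable"
fi
kill -TERM "$pid" 2>/dev/null || true
```

This is why the script can later deliver `SIGTERM` to `$BACKEND_PID` and `$FRONTEND_PID` during cleanup even though the jobs were removed from job control.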

### 4. Streamlined wait_for_servers() Function

**Location**: `wait_for_servers()` function, line ~528

**Before**:

```bash
wait_for_servers() {
    # Set up signal traps only after servers are successfully running
    print_status "Setting up signal handlers for graceful shutdown..."
    trap 'graceful_shutdown' INT TERM

    print_status "🚀 Servers are running! Press Ctrl+C for graceful shutdown."
    print_status "📋 Backend: http://localhost:8000 | Frontend: http://localhost:5173"

    # Keep the script alive and wait for signals
    while [ "$CLEANUP_PERFORMED" != true ]; do
        # Check if both servers are still running
        if [ -n "$BACKEND_PID" ] && ! kill -0 "$BACKEND_PID" 2>/dev/null; then
            print_error "Backend server has stopped unexpectedly"
            graceful_shutdown
        fi

        if [ -n "$FRONTEND_PID" ] && ! kill -0 "$FRONTEND_PID" 2>/dev/null; then
            print_error "Frontend server has stopped unexpectedly"
            graceful_shutdown
        fi

        sleep 2
    done
}
```

**After**:

```bash
wait_for_servers() {
    print_status "🚀 Servers are running! Press Ctrl+C for graceful shutdown."
    print_status "📋 Backend: http://localhost:8000 | Frontend: http://localhost:5173"

    # Keep the script alive and wait for signals
    while [ "$CLEANUP_PERFORMED" != true ]; do
        # Check if both servers are still running
        if [ -n "$BACKEND_PID" ] && ! kill -0 "$BACKEND_PID" 2>/dev/null; then
            print_error "Backend server has stopped unexpectedly"
            graceful_shutdown
            break
        fi

        if [ -n "$FRONTEND_PID" ] && ! kill -0 "$FRONTEND_PID" 2>/dev/null; then
            print_error "Frontend server has stopped unexpectedly"
            graceful_shutdown
            break
        fi

        # Use shorter sleep and check for signals more frequently
        sleep 1
    done
}
```

## Benefits of the Fix

1. **Immediate Signal Response**: Ctrl+C now works immediately at any point during script execution, including during startup.

2. **Proper Cleanup**: All background processes are properly terminated when the script receives a signal.

3. **No Orphaned Processes**: The `disown -h` command ensures background processes receive signals while preventing shell job control interference.

4. **Faster Response**: Reducing the monitoring sleep interval from 2 seconds to 1 makes signal handling more responsive.

5. **Robust Error Handling**: Prevents recursive signal calls and ensures cleanup only happens once.
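The TERM-then-KILL escalation behind the "Proper Cleanup" benefit can be factored into a small helper. A sketch under stated assumptions — `stop_process` is a hypothetical name, not part of the script, which inlines this logic once per server:

```bash
# Bounded shutdown wait, mirroring the per-server cleanup in graceful_shutdown():
# ask politely with SIGTERM, poll for up to a timeout, then SIGKILL as a last resort.
stop_process() {
    local pid="$1"
    local timeout="${2:-10}"   # default matches the script's 10-second wait
    local count=0

    kill -TERM "$pid" 2>/dev/null || true
    while [ "$count" -lt "$timeout" ] && kill -0 "$pid" 2>/dev/null; do
        sleep 1
        count=$((count + 1))
    done

    if kill -0 "$pid" 2>/dev/null; then
        kill -KILL "$pid" 2>/dev/null || true
        echo "force-killed"
    else
        echo "stopped"
    fi
}

sleep 60 &
stop_process $! 5    # a plain sleep exits on the first SIGTERM, so this reports "stopped"
```

The bounded loop is what keeps shutdown snappy for well-behaved servers while still guaranteeing termination of a hung one.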

## Testing

The fix was validated by:

- Running syntax validation: `bash -n shared/scripts/start-servers.sh` (passed)
- Ensuring script permissions are correct: `chmod +x shared/scripts/start-servers.sh`

## Usage

The script now properly responds to Ctrl+C at any time during execution:

```bash
./shared/scripts/start-servers.sh
# Press Ctrl+C at any time for graceful shutdown
```

The script will:

1. Display "Received shutdown signal - performing graceful shutdown..."
2. Stop the backend server (Django with runserver_plus)
3. Stop the frontend server (Vite)
4. Clean up PID files
5. Exit gracefully

## Technical Notes

- Signal traps are set early in the `main()` function before any long-running operations
- The `disown -h` command removes background processes from job control while keeping them as child processes that can receive signals
- The `trap - INT TERM` command in `graceful_shutdown()` prevents recursive signal handling
- The monitoring loop includes explicit `break` statements to exit cleanly after cleanup
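These notes condense into a small runnable sketch of the overall pattern. The names mirror the script's, but the body is a simplification: the self-sent SIGTERM simulates a user pressing Ctrl+C shortly after startup:

```bash
# Early trap registration, one-shot cleanup, and a flag-driven monitoring loop.
CLEANUP_PERFORMED=false

graceful_shutdown() {
    # One-shot guard: later signals are ignored once cleanup has started.
    if [ "$CLEANUP_PERFORMED" = true ]; then
        return 0
    fi
    CLEANUP_PERFORMED=true
    trap - INT TERM    # drop the handlers to avoid recursive calls
    echo "graceful shutdown"
}

# Register handlers before any long-running work begins.
trap 'graceful_shutdown' INT TERM

# Simulate an incoming shutdown signal shortly after startup.
( sleep 1; kill -TERM "$$" ) &

# Monitoring loop: short sleeps keep signal response prompt.
while [ "$CLEANUP_PERFORMED" != true ]; do
    sleep 1
done
echo "exited monitor loop"
```

Because the trap fires between loop iterations and flips `CLEANUP_PERFORMED`, the loop exits on its own without needing the handler to `exit` (the real script does exit from the handler, which is equivalent for its purposes).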
 575  shared/scripts/start-servers.sh  (new executable file)
@@ -0,0 +1,575 @@
#!/bin/bash

# ThrillWiki Server Start Script
# Stops any running servers, clears caches, runs migrations, and starts both servers
# Works whether servers are currently running or not
# Usage: ./start-servers.sh

set -e  # Exit on any error

# Global variables for process management
BACKEND_PID=""
FRONTEND_PID=""
CLEANUP_PERFORMED=false

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
BACKEND_DIR="$PROJECT_ROOT/backend"
FRONTEND_DIR="$PROJECT_ROOT/frontend"

# Function to print colored output
print_status() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

print_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

print_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

print_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Function for graceful shutdown
graceful_shutdown() {
    if [ "$CLEANUP_PERFORMED" = true ]; then
        return 0
    fi

    CLEANUP_PERFORMED=true

    print_warning "Received shutdown signal - performing graceful shutdown..."

    # Disable further signal handling to prevent recursive calls
    trap - INT TERM

    # Kill backend server if running
    if [ -n "$BACKEND_PID" ] && kill -0 "$BACKEND_PID" 2>/dev/null; then
        print_status "Stopping backend server (PID: $BACKEND_PID)..."
        kill -TERM "$BACKEND_PID" 2>/dev/null || true

        # Wait up to 10 seconds for graceful shutdown
        local count=0
        while [ $count -lt 10 ] && kill -0 "$BACKEND_PID" 2>/dev/null; do
            sleep 1
            count=$((count + 1))
        done

        # Force kill if still running
        if kill -0 "$BACKEND_PID" 2>/dev/null; then
            print_warning "Force killing backend server..."
            kill -KILL "$BACKEND_PID" 2>/dev/null || true
        fi
        print_success "Backend server stopped"
    else
        print_status "Backend server not running or already stopped"
    fi

    # Kill frontend server if running
    if [ -n "$FRONTEND_PID" ] && kill -0 "$FRONTEND_PID" 2>/dev/null; then
        print_status "Stopping frontend server (PID: $FRONTEND_PID)..."
        kill -TERM "$FRONTEND_PID" 2>/dev/null || true

        # Wait up to 10 seconds for graceful shutdown
        local count=0
        while [ $count -lt 10 ] && kill -0 "$FRONTEND_PID" 2>/dev/null; do
            sleep 1
            count=$((count + 1))
        done

        # Force kill if still running
        if kill -0 "$FRONTEND_PID" 2>/dev/null; then
            print_warning "Force killing frontend server..."
            kill -KILL "$FRONTEND_PID" 2>/dev/null || true
        fi
        print_success "Frontend server stopped"
    else
        print_status "Frontend server not running or already stopped"
    fi

    # Clear PID files if they exist
    if [ -f "$PROJECT_ROOT/shared/logs/backend.pid" ]; then
        rm -f "$PROJECT_ROOT/shared/logs/backend.pid"
    fi
    if [ -f "$PROJECT_ROOT/shared/logs/frontend.pid" ]; then
        rm -f "$PROJECT_ROOT/shared/logs/frontend.pid"
    fi

    print_success "Graceful shutdown completed"
    exit 0
}

# Function to kill processes by pattern
kill_processes() {
    local pattern="$1"
    local description="$2"

    print_status "Checking for $description processes..."

    # Find and kill processes
    local pids=$(pgrep -f "$pattern" 2>/dev/null || true)

    if [ -n "$pids" ]; then
        print_status "Found $description processes, stopping them..."
        echo "$pids" | xargs kill -TERM 2>/dev/null || true
        sleep 2

        # Force kill if still running
        local remaining_pids=$(pgrep -f "$pattern" 2>/dev/null || true)
        if [ -n "$remaining_pids" ]; then
            print_warning "Force killing remaining $description processes..."
            echo "$remaining_pids" | xargs kill -KILL 2>/dev/null || true
        fi

        print_success "$description processes stopped"
    else
        print_status "No $description processes found (this is fine)"
    fi
}

# Function to clear Django cache
clear_django_cache() {
    print_status "Clearing Django cache..."

    cd "$BACKEND_DIR"

    # Clear Django cache
    if command -v uv >/dev/null 2>&1; then
        if ! uv run manage.py clear_cache 2>clear_cache_error.log; then
            print_error "Django clear_cache command failed:"
            cat clear_cache_error.log
            rm -f clear_cache_error.log
            exit 1
        else
            rm -f clear_cache_error.log
        fi
    else
        print_error "uv not found! Please install uv first."
        exit 1
    fi

    # Remove Python cache files
    find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
    find . -name "*.pyc" -delete 2>/dev/null || true
    find . -name "*.pyo" -delete 2>/dev/null || true

    print_success "Django cache cleared"
}

# Function to clear frontend cache
clear_frontend_cache() {
    print_status "Clearing frontend cache..."

    cd "$FRONTEND_DIR"

    # Remove node_modules/.cache if it exists
    if [ -d "node_modules/.cache" ]; then
        rm -rf node_modules/.cache
        print_status "Removed node_modules/.cache"
    fi

    # Remove .nuxt cache if it exists (for Nuxt projects)
    if [ -d ".nuxt" ]; then
        rm -rf .nuxt
        print_status "Removed .nuxt cache"
    fi

    # Remove dist/build directories
    if [ -d "dist" ]; then
        rm -rf dist
        print_status "Removed dist directory"
    fi

    if [ -d "build" ]; then
        rm -rf build
        print_status "Removed build directory"
    fi

    # Clear pnpm cache
    if command -v pnpm >/dev/null 2>&1; then
        pnpm store prune 2>/dev/null || print_warning "Could not prune pnpm store"
    else
        print_error "pnpm not found! Please install pnpm first."
        exit 1
    fi

    print_success "Frontend cache cleared"
}

# Function to run Django migrations
run_migrations() {
    print_status "Running Django migrations..."

    cd "$BACKEND_DIR"

    # Check for pending migrations
    if uv run python manage.py showmigrations --plan | grep -q "\[ \]"; then
        print_status "Pending migrations found, applying..."
        uv run python manage.py migrate
        print_success "Migrations applied successfully"
    else
        print_status "No pending migrations found"
    fi

    # Run any custom management commands if needed
    # uv run python manage.py collectstatic --noinput --clear 2>/dev/null || print_warning "collectstatic failed or not needed"
}

# Function to start backend server
start_backend() {
    print_status "Starting Django backend server with runserver_plus (verbose output)..."

    cd "$BACKEND_DIR"

    # Start Django development server with runserver_plus for enhanced features and verbose output
    print_status "Running: uv run python manage.py runserver_plus 8000 --verbosity=2"
    uv run python manage.py runserver_plus 8000 --verbosity=2 &
    BACKEND_PID=$!

    # Make sure the background process can receive signals
    disown -h "$BACKEND_PID" 2>/dev/null || true

    # Wait a moment and check if it started successfully
    sleep 3
    if kill -0 $BACKEND_PID 2>/dev/null; then
        print_success "Backend server started (PID: $BACKEND_PID)"
        echo $BACKEND_PID > ../shared/logs/backend.pid
    else
        print_error "Failed to start backend server"
        return 1
    fi
}

# Function to start frontend server
start_frontend() {
    print_status "Starting frontend server with verbose output..."

    cd "$FRONTEND_DIR"

    # Install dependencies if node_modules doesn't exist or package.json is newer
    if [ ! -d "node_modules" ] || [ "package.json" -nt "node_modules" ]; then
        print_status "Installing/updating frontend dependencies..."
        pnpm install
    fi

    # Start frontend development server using Vite with explicit port, auto-open, and verbose output
    # --port 5173: Use standard Vite port
    # --open: Automatically open browser when ready
    # --host localhost: Ensure it binds to localhost
    # --debug: Enable debug logging
    print_status "Starting Vite development server with verbose output and auto-browser opening..."
    print_status "Running: pnpm vite --port 5173 --open --host localhost --debug"
    pnpm vite --port 5173 --open --host localhost --debug &
    FRONTEND_PID=$!

    # Make sure the background process can receive signals
    disown -h "$FRONTEND_PID" 2>/dev/null || true

    # Wait a moment and check if it started successfully
    sleep 3
    if kill -0 $FRONTEND_PID 2>/dev/null; then
        print_success "Frontend server started (PID: $FRONTEND_PID) - browser should open automatically"
        echo $FRONTEND_PID > ../shared/logs/frontend.pid
    else
        print_error "Failed to start frontend server"
        return 1
    fi
}

# Function to detect operating system
detect_os() {
    case "$(uname -s)" in
        Darwin*) echo "macos";;
        Linux*) echo "linux";;
        *) echo "unknown";;
    esac
}

# Function to open browser on the appropriate OS
open_browser() {
    local url="$1"
    local os=$(detect_os)

    print_status "Opening browser to $url..."

    case "$os" in
        "macos")
            if command -v open >/dev/null 2>&1; then
                open "$url" 2>/dev/null || print_warning "Failed to open browser automatically"
            else
                print_warning "Cannot open browser: 'open' command not available"
            fi
            ;;
        "linux")
            if command -v xdg-open >/dev/null 2>&1; then
                xdg-open "$url" 2>/dev/null || print_warning "Failed to open browser automatically"
            else
                print_warning "Cannot open browser: 'xdg-open' command not available"
            fi
            ;;
        *)
            print_warning "Cannot open browser automatically: Unsupported operating system"
            ;;
    esac
}

# Function to verify frontend is responding (simplified since port is known)
verify_frontend_ready() {
    local frontend_url="http://localhost:5173"
    local max_checks=15
    local check=0

    print_status "Verifying frontend server is responding at $frontend_url..."

    while [ $check -lt $max_checks ]; do
        local response_code=$(curl -s -o /dev/null -w "%{http_code}" "$frontend_url" 2>/dev/null)
        if [ "$response_code" = "200" ] || [ "$response_code" = "301" ] || [ "$response_code" = "302" ] || [ "$response_code" = "404" ]; then
            print_success "Frontend server is responding (HTTP $response_code)"
            return 0
        fi

        if [ $((check % 3)) -eq 0 ]; then
            print_status "Waiting for frontend to respond... (attempt $((check + 1))/$max_checks)"
        fi
        sleep 2
        check=$((check + 1))
    done

    print_warning "Frontend may still be starting up"
    return 1
}

# Function to verify servers are responding
verify_servers_ready() {
    print_status "Verifying both servers are responding..."

    # Check backend
    local backend_ready=false
    local frontend_ready=false
    local max_checks=10
    local check=0

    while [ $check -lt $max_checks ]; do
        # Check backend
        if [ "$backend_ready" = false ]; then
            local backend_response=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:8000" 2>/dev/null)
            if [ "$backend_response" = "200" ] || [ "$backend_response" = "301" ] || [ "$backend_response" = "302" ] || [ "$backend_response" = "404" ]; then
                print_success "Backend server is responding (HTTP $backend_response)"
                backend_ready=true
            fi
        fi

        # Check frontend
        if [ "$frontend_ready" = false ]; then
            local frontend_response=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:5173" 2>/dev/null)
            if [ "$frontend_response" = "200" ] || [ "$frontend_response" = "301" ] || [ "$frontend_response" = "302" ] || [ "$frontend_response" = "404" ]; then
                print_success "Frontend server is responding (HTTP $frontend_response)"
                frontend_ready=true
            fi
        fi

        # Both ready?
        if [ "$backend_ready" = true ] && [ "$frontend_ready" = true ]; then
            print_success "Both servers are responding!"
            return 0
        fi

        sleep 2
        check=$((check + 1))
    done

    # Show status of what's working
    if [ "$backend_ready" = true ]; then
        print_success "Backend is ready at http://localhost:8000"
    else
        print_warning "Backend may still be starting up"
    fi

    if [ "$frontend_ready" = true ]; then
        print_success "Frontend is ready at http://localhost:5173"
    else
        print_warning "Frontend may still be starting up"
    fi
}

# Function to create logs directory if it doesn't exist
ensure_logs_dir() {
    local logs_dir="$PROJECT_ROOT/shared/logs"
    if [ ! -d "$logs_dir" ]; then
        mkdir -p "$logs_dir"
        print_status "Created logs directory: $logs_dir"
    fi
}

# Function to validate project structure
validate_project() {
    if [ ! -d "$BACKEND_DIR" ]; then
        print_error "Backend directory not found: $BACKEND_DIR"
        exit 1
    fi

    if [ ! -d "$FRONTEND_DIR" ]; then
        print_error "Frontend directory not found: $FRONTEND_DIR"
        exit 1
    fi

    if [ ! -f "$BACKEND_DIR/manage.py" ]; then
        print_error "Django manage.py not found in: $BACKEND_DIR"
        exit 1
    fi

    if [ ! -f "$FRONTEND_DIR/package.json" ]; then
        print_error "Frontend package.json not found in: $FRONTEND_DIR"
        exit 1
    fi
}

# Function to kill processes using specific ports
kill_port_processes() {
    local port="$1"
    local description="$2"

    print_status "Checking for processes using port $port ($description)..."

    # Find processes using the specific port
    local pids=$(lsof -ti :$port 2>/dev/null || true)

    if [ -n "$pids" ]; then
        print_warning "Found processes using port $port, killing them..."
        echo "$pids" | xargs kill -TERM 2>/dev/null || true
        sleep 2

        # Force kill if still running
        local remaining_pids=$(lsof -ti :$port 2>/dev/null || true)
        if [ -n "$remaining_pids" ]; then
            print_warning "Force killing remaining processes on port $port..."
            echo "$remaining_pids" | xargs kill -KILL 2>/dev/null || true
        fi

        print_success "Port $port cleared"
    else
        print_status "Port $port is available"
    fi
}

# Function to check and clear required ports
check_and_clear_ports() {
    print_status "Checking and clearing required ports..."

    # Kill processes using our specific ports
    kill_port_processes 8000 "Django backend"
    kill_port_processes 5173 "Frontend Vite"
}
# Main execution function
main() {
    print_status "ThrillWiki Server Start Script Starting..."
    print_status "This script works whether servers are currently running or not."
    print_status "Project root: $PROJECT_ROOT"

    # Set up signal traps EARLY - before any long-running operations
    print_status "Setting up signal handlers for graceful shutdown..."
    trap 'graceful_shutdown' INT TERM

    # Validate project structure
    validate_project

    # Ensure logs directory exists
    ensure_logs_dir

    # Check and clear ports
    check_and_clear_ports

    # Kill existing server processes (if any)
    print_status "=== Stopping Any Running Servers ==="
    print_status "Note: It's perfectly fine if no servers are currently running"
    kill_processes "manage.py runserver" "Django backend"
    kill_processes "pnpm.*dev\|npm.*dev\|yarn.*dev\|node.*dev" "Frontend development"
    kill_processes "uvicorn\|gunicorn" "Python web servers"

    # Clear caches
    print_status "=== Clearing Caches ==="
    clear_django_cache
    clear_frontend_cache

    # Run migrations
    print_status "=== Running Migrations ==="
    run_migrations

    # Start servers
    print_status "=== Starting Servers ==="

    # Start backend first
    if start_backend; then
        print_success "Backend server is running"
    else
        print_error "Failed to start backend server"
        exit 1
    fi

    # Start frontend
    if start_frontend; then
        print_success "Frontend server is running"
    else
        print_error "Failed to start frontend server"
        print_status "Backend server is still running"
        exit 1
    fi

    # Verify servers are responding
    print_status "=== Verifying Servers ==="
    verify_servers_ready

    # Final status
    print_status "=== Server Status ==="
    print_success "✅ Backend server: http://localhost:8000 (Django with runserver_plus)"
    print_success "✅ Frontend server: http://localhost:5173 (Vite with verbose output)"
    print_status "🌐 Browser should have opened automatically via Vite --open"
    print_status "🔧 To stop servers, use: kill \$(cat $PROJECT_ROOT/shared/logs/backend.pid) \$(cat $PROJECT_ROOT/shared/logs/frontend.pid)"
    print_status "📋 Both servers are running with verbose output directly in your terminal"

    print_success "🚀 All servers started successfully with full verbose output!"

    # Keep the script running and wait for signals
    wait_for_servers
}

# Wait for servers function to keep script running and handle signals
wait_for_servers() {
    print_status "🚀 Servers are running! Press Ctrl+C for graceful shutdown."
    print_status "📋 Backend: http://localhost:8000 | Frontend: http://localhost:5173"

    # Keep the script alive and wait for signals
    while [ "$CLEANUP_PERFORMED" != true ]; do
|
||||
# Check if both servers are still running
|
||||
if [ -n "$BACKEND_PID" ] && ! kill -0 "$BACKEND_PID" 2>/dev/null; then
|
||||
print_error "Backend server has stopped unexpectedly"
|
||||
graceful_shutdown
|
||||
break
|
||||
fi
|
||||
|
||||
if [ -n "$FRONTEND_PID" ] && ! kill -0 "$FRONTEND_PID" 2>/dev/null; then
|
||||
print_error "Frontend server has stopped unexpectedly"
|
||||
graceful_shutdown
|
||||
break
|
||||
fi
|
||||
|
||||
# Use shorter sleep and check for signals more frequently
|
||||
sleep 1
|
||||
done
|
||||
}
|
||||
|
||||
# Run main function (no traps set up initially)
|
||||
main "$@"