Mirror of https://github.com/pacnpal/Pac-cogs.git, synced 2025-12-20 02:41:06 -05:00

Commit: fixed issues
**README.md** (rewritten from 237 to 138 lines in this commit)

# VideoArchiver Cog

A Red-DiscordBot cog for automatically archiving videos from monitored Discord channels.

## Features

- Automatically detects and downloads videos from monitored channels
- Supports multiple video hosting platforms through yt-dlp
- Enhanced queue system with priority processing and performance metrics
- Configurable video quality and format
- Role-based access control
- Automatic file cleanup
- Hardware-accelerated video processing (when available)
- Customizable notification messages
- Queue persistence across bot restarts

## File Structure

The cog is organized into several modules for better maintainability:

- `video_archiver.py`: Main cog class and entry point
- `commands.py`: Discord command handlers
- `config_manager.py`: Guild configuration management
- `processor.py`: Video processing logic
- `enhanced_queue.py`: Advanced queue management system
- `update_checker.py`: yt-dlp update management
- `utils.py`: Utility functions and classes
- `ffmpeg_manager.py`: FFmpeg configuration and hardware acceleration
- `exceptions.py`: Custom exception classes

## Installation

1. Install the cog using Red's cog manager:

```bash
[p]repo add videoarchiver <repository_url>
[p]cog install videoarchiver
```

2. Load the cog:

```bash
[p]load videoarchiver
```

Replace `[p]` with your bot's prefix.

## Configuration

Use the following commands to configure the cog:

### Channel Settings
- `[p]va setchannel <channel>`: Set the archive channel
- `[p]va setnotification <channel>`: Set the notification channel
- `[p]va setlogchannel <channel>`: Set the log channel for errors and notifications
- `[p]va addmonitor <channel>`: Add a channel to monitor
- `[p]va removemonitor <channel>`: Remove a monitored channel

### Role Management
- `[p]va addrole <role>`: Add a role allowed to trigger archiving
- `[p]va removerole <role>`: Remove an allowed role
- `[p]va listroles`: List allowed roles (empty list = all users allowed)

### Video Settings
- `[p]va setformat <format>`: Set the video format (e.g., mp4, webm)
- `[p]va setquality <pixels>`: Set the maximum video quality (e.g., 1080)
- `[p]va setmaxsize <MB>`: Set the maximum file size in MB (default 8)
- `[p]va setconcurrent <count>`: Set the number of concurrent downloads (1-5)

### Message Settings
- `[p]va setduration <hours>`: Set how long to keep archive messages
- `[p]va settemplate <template>`: Set the archive message template (example below)
- `[p]va toggledelete`: Toggle deletion of local files after reposting
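
The template supports the `{author}`, `{url}`, and `{original_message}` placeholders. For example:

```bash
[p]va settemplate "Archived video from {author}\nOriginal: {original_message}"
```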

### Site Management
- `[p]va enablesites [sites...]`: Enable specific sites (leave empty to allow all sites)
- `[p]va listsites`: List available and enabled sites

### Queue Management
- `[p]va queue`: Show detailed queue status and metrics
- `[p]va clearqueue`: Clear the processing queue
- `[p]va queuemetrics`: Display queue performance metrics

### Update Management
- `[p]va updateytdlp`: Update yt-dlp to the latest version
- `[p]va toggleupdates`: Toggle update notifications

## Technical Details

### Enhanced Queue System
The cog uses an advanced queue system with the following features (see the sketch after this list):

- Priority-based processing (the first URL in a message gets the highest priority)
- Queue persistence across bot restarts
- Automatic memory management and cleanup
- Performance metrics tracking (success rate, processing times)
- Health monitoring with automatic issue detection
- Deadlock prevention
- Configurable cleanup intervals
- Size-limited queue to prevent memory issues
- Detailed status tracking per guild
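
A minimal sketch of the ordering rule, mirroring the sort key used in `enhanced_queue.py` later on this page (higher `priority` first, then oldest `added_at` first; the URLs are placeholders):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Item:
    url: str
    priority: int = 0  # higher number = higher priority
    added_at: datetime = field(default_factory=datetime.utcnow)

queue = [
    Item("https://example.com/a", priority=0),
    Item("https://example.com/b", priority=2),
    Item("https://example.com/c", priority=2),
]
# Same key as EnhancedVideoQueueManager.add_to_queue:
# descending priority, then FIFO within a priority level.
queue.sort(key=lambda x: (-x.priority, x.added_at))
print([i.url for i in queue])  # -> b, c, a
```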

### Queue Metrics
The queue system tracks various performance metrics (a condensed sketch of the core calculations follows the list):

- Total processed videos
- Success/failure rates
- Average processing time
- Peak memory usage
- Queue size per guild/channel
- Processing history
- Cleanup statistics
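
The success rate and average processing time are computed as in `QueueMetrics.update_metrics` in `enhanced_queue.py` below: a running success ratio plus a sliding window over the last 100 processing times. A condensed sketch:

```python
class Metrics:
    def __init__(self):
        self.total_processed = 0
        self.total_failed = 0
        self.processing_times = []

    def update(self, processing_time: float, success: bool) -> None:
        self.total_processed += 1
        if not success:
            self.total_failed += 1
        # Sliding window: keep only the last 100 processing times.
        self.processing_times.append(processing_time)
        if len(self.processing_times) > 100:
            self.processing_times.pop(0)

    @property
    def success_rate(self) -> float:
        if self.total_processed == 0:
            return 0.0
        return (self.total_processed - self.total_failed) / self.total_processed

    @property
    def avg_processing_time(self) -> float:
        if not self.processing_times:
            return 0.0
        return sum(self.processing_times) / len(self.processing_times)
```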

### Configuration Management
- Settings are stored per guild
- Supports hot-reloading of configurations
- Automatic validation of settings

### Error Handling
- Comprehensive error logging
- Automatic retry mechanisms with configurable attempts (see the sketch below)
- Guild-specific error reporting
- Detailed failure tracking
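
Retries driven by the `max_retries` and `retry_delay` guild settings typically follow this shape; the helper name is hypothetical, not the cog's actual code:

```python
import asyncio
import logging

log = logging.getLogger("red.videoarchiver")

async def with_retries(coro_factory, max_retries: int = 3, retry_delay: int = 5):
    """Run coro_factory() up to max_retries times, sleeping retry_delay seconds between attempts."""
    last_exc = None
    for attempt in range(1, max_retries + 1):
        try:
            return await coro_factory()
        except Exception as exc:
            last_exc = exc
            log.warning("Attempt %d/%d failed: %s", attempt, max_retries, exc)
            if attempt < max_retries:
                await asyncio.sleep(retry_delay)
    if last_exc is None:
        raise RuntimeError("max_retries must be at least 1")
    raise last_exc
```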
|
|
||||||
3. **Update Management**:
|
### Performance Optimizations
|
||||||
- Proper version comparison
|
- Hardware-accelerated video processing when available
|
||||||
- Network timeout handling
|
- Efficient file handling with secure deletion
|
||||||
- Update notification retries
|
- Memory leak prevention through proper resource cleanup
|
||||||
- Error context preservation
|
- Automatic resource monitoring
|
||||||
|
- Periodic cleanup of old queue items
|
||||||
|
- Memory usage optimization
|
||||||
|
|
||||||
4. **Resource Management**:
|
## Requirements
|
||||||
- Proper task tracking and cleanup
|
|
||||||
- Component lifecycle management
|
|
||||||
- File handle cleanup
|
|
||||||
- Memory leak prevention
|
|
||||||
|
|
||||||
## Troubleshooting
|
- Python 3.8 or higher
|
||||||
|
- FFmpeg
|
||||||
1. **Permission Issues**:
|
- yt-dlp
|
||||||
- Bot needs "Manage Messages" permission
|
- Discord.py 2.0 or higher
|
||||||
- Bot needs "Attach Files" permission
|
- Red-DiscordBot V3
|
||||||
- Bot needs "Read Message History" permission
|
- psutil>=5.9.0
|
||||||
- Bot needs "Use Application Commands" for slash commands
|
|
||||||
|
|
||||||
2. **Video Processing Issues**:
|
|
||||||
- Check log channel for detailed error messages
|
|
||||||
- Ensure FFmpeg is properly installed
|
|
||||||
- Check GPU drivers are up to date
|
|
||||||
- Verify file permissions in the downloads directory
|
|
||||||
- Update yt-dlp if videos fail to download
|
|
||||||
|
|
||||||
3. **Role Issues**:
|
|
||||||
- Verify role hierarchy (bot's role must be higher than managed roles)
|
|
||||||
- Check if roles are properly configured
|
|
||||||
- Check log channel for permission errors
|
|
||||||
|
|
||||||
4. **Performance Issues**:
|
|
||||||
- Check available disk space
|
|
||||||
- Monitor system resource usage
|
|
||||||
- Check log channel for encoding errors
|
|
||||||
- Verify GPU availability and status
|
|
||||||
|
|
||||||
## Support
|
## Support
|
||||||
|
|
||||||
For support:
|
For issues and feature requests, please use the issue tracker on GitHub.
|
||||||
1. First, check the [Troubleshooting](#troubleshooting) section above
|
|
||||||
2. Check the log channel for detailed error messages
|
|
||||||
3. Update yt-dlp to the latest version:
|
|
||||||
```bash
|
|
||||||
[p]videoarchiver updateytdlp
|
|
||||||
```
|
|
||||||
4. If the issue persists after updating yt-dlp:
|
|
||||||
- Join the Red-DiscordBot server and ask in the #support channel
|
|
||||||
- Open an issue on GitHub with:
|
|
||||||
- Your Red-Bot version
|
|
||||||
- The output of `[p]pipinstall list`
|
|
||||||
- Steps to reproduce the issue
|
|
||||||
- Any error messages from the log channel
|
|
||||||
- Your hardware configuration (CPU/GPU)
|
|
||||||
|
|
||||||
## Contributing
|
|
||||||
|
|
||||||
Contributions are welcome! Please feel free to submit a Pull Request.
|
|
||||||
|
|
||||||
Before submitting an issue:
|
|
||||||
1. Update yt-dlp to the latest version first:
|
|
||||||
```bash
|
|
||||||
[p]videoarchiver updateytdlp
|
|
||||||
```
|
|
||||||
2. If the issue persists after updating yt-dlp, please include:
|
|
||||||
- Your Red-Bot version
|
|
||||||
- The output of `[p]pipinstall list`
|
|
||||||
- Steps to reproduce the issue
|
|
||||||
- Any error messages from the log channel
|
|
||||||
- Your hardware configuration (CPU/GPU)
|
|
||||||
- FFmpeg version and configuration
|
|
||||||
|
|
||||||
## License
|
|
||||||
|
|
||||||
This cog is licensed under the MIT License - see the [LICENSE](../LICENSE) file for details.
|
|
||||||
|

**videoarchiver/__init__.py** (expanded from 16 to 134 lines in this commit)

```python
"""VideoArchiver cog for Red-DiscordBot"""
import logging
import shutil
import sys
from pathlib import Path
from typing import Optional
import asyncio
import pkg_resources

from redbot.core.bot import Red
from redbot.core.utils import get_end_user_data_statement
from redbot.core.errors import CogLoadError
from .video_archiver import VideoArchiver
from .exceptions import ProcessingError

__version__ = "1.0.0"

log = logging.getLogger("red.videoarchiver")

REQUIRED_PYTHON_VERSION = (3, 8, 0)
REQUIRED_PACKAGES = {
    'yt-dlp': '2024.11.4',
    'ffmpeg-python': '0.2.0',
    'aiohttp': '3.8.0',
    'packaging': '20.0',
}


def check_dependencies() -> Optional[str]:
    """Check if all required dependencies are met."""
    # Check Python version
    if sys.version_info < REQUIRED_PYTHON_VERSION:
        return (
            f"Python {'.'.join(map(str, REQUIRED_PYTHON_VERSION))} or higher is required. "
            f"Current version: {'.'.join(map(str, sys.version_info[:3]))}"
        )

    # Check required packages
    missing_packages = []
    outdated_packages = []

    for package, min_version in REQUIRED_PACKAGES.items():
        try:
            installed_version = pkg_resources.get_distribution(package).version
            if pkg_resources.parse_version(installed_version) < pkg_resources.parse_version(min_version):
                outdated_packages.append(f"{package}>={min_version}")
        except pkg_resources.DistributionNotFound:
            missing_packages.append(f"{package}>={min_version}")

    if missing_packages or outdated_packages:
        error_msg = []
        if missing_packages:
            error_msg.append(f"Missing packages: {', '.join(missing_packages)}")
        if outdated_packages:
            error_msg.append(f"Outdated packages: {', '.join(outdated_packages)}")
        return "\n".join(error_msg)

    return None


async def setup(bot: Red) -> None:
    """Load VideoArchiver cog with enhanced error handling."""
    try:
        # Check dependencies
        if dependency_error := check_dependencies():
            raise CogLoadError(
                f"Dependencies not met:\n{dependency_error}\n"
                "Please install/upgrade the required packages."
            )

        # Check for FFmpeg on PATH. (The original probed a dummy filename with
        # ffmpeg.probe(), which raises even when ffprobe is installed; looking
        # the binary up on PATH is the check actually intended here.)
        if not shutil.which("ffmpeg"):
            raise CogLoadError(
                "FFmpeg is not installed or not found in PATH. "
                "Please install FFmpeg before loading this cog."
            )

        # Initialize cog
        cog = VideoArchiver(bot)
        await bot.add_cog(cog)

        # Store cog instance for proper cleanup
        bot._videoarchiver = cog

        log.info(
            f"VideoArchiver v{__version__} loaded successfully\n"
            f"Python version: {sys.version_info[0]}.{sys.version_info[1]}.{sys.version_info[2]}\n"
            f"Running on: {sys.platform}"
        )

    except CogLoadError as e:
        log.error(f"Failed to load VideoArchiver: {str(e)}")
        raise
    except Exception as e:
        log.exception("Unexpected error loading VideoArchiver:", exc_info=e)
        raise CogLoadError(f"Unexpected error: {str(e)}")


async def teardown(bot: Red) -> None:
    """Clean up when cog is unloaded."""
    try:
        # Get cog instance
        cog = getattr(bot, '_videoarchiver', None)
        if cog:
            # Perform async cleanup
            await cog.cog_unload()
            # Remove stored instance
            delattr(bot, '_videoarchiver')

        log.info("VideoArchiver unloaded successfully")

    except Exception as e:
        log.exception("Error during VideoArchiver teardown:", exc_info=e)
        # Don't raise here to ensure clean unload even if cleanup fails


def get_data_statement() -> str:
    """Get the end user data statement."""
    return """This cog stores the following user data:
1. User IDs for tracking video processing permissions
2. Message IDs and channel IDs for tracking processed videos
3. Guild-specific settings and configurations

Data is stored locally and is necessary for the cog's functionality.
No data is shared with external services.

Users can request data deletion by:
1. Removing the bot from their server
2. Using the bot's data deletion commands
3. Contacting the bot owner

Note: Video files are temporarily stored during processing and are
automatically deleted after successful upload or on error."""


# Set end user data statement
__red_end_user_data_statement__ = get_data_statement()
```
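
`pkg_resources` is deprecated in recent setuptools releases; on Python 3.8+ the same check can be written with `importlib.metadata` and `packaging` (already listed in `REQUIRED_PACKAGES`). A sketch of an equivalent, not part of the cog:

```python
from importlib import metadata
from typing import Optional

from packaging.version import Version


def check_package(package: str, min_version: str) -> Optional[str]:
    """Return an error string if `package` is missing or older than `min_version`."""
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        return f"Missing package: {package}>={min_version}"
    if Version(installed) < Version(min_version):
        return f"Outdated package: {package}=={installed} (need >={min_version})"
    return None
```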

**videoarchiver/commands.py** (new file, 304 lines)

```python
"""Discord commands for VideoArchiver"""
import discord
from redbot.core import commands, checks
from typing import Optional
import yt_dlp
from datetime import datetime


class VideoArchiverCommands(commands.Cog):
    """Command handler for VideoArchiver"""

    def __init__(self, bot, config_manager, update_checker, processor):
        self.bot = bot
        self.config = config_manager
        self.update_checker = update_checker
        self.processor = processor

    @commands.hybrid_group(name="videoarchiver", aliases=["va"])
    @commands.guild_only()
    @commands.admin_or_permissions(administrator=True)
    async def videoarchiver(self, ctx: commands.Context):
        """Video Archiver configuration commands"""
        if ctx.invoked_subcommand is None:
            embed = await self.config.format_settings_embed(ctx.guild)
            await ctx.send(embed=embed)

    @videoarchiver.command(name="updateytdlp")
    @checks.is_owner()
    async def update_ytdlp(self, ctx: commands.Context):
        """Update yt-dlp to the latest version"""
        success, message = await self.update_checker.update_yt_dlp()
        await ctx.send("✅ " + message if success else "❌ " + message)

    @videoarchiver.command(name="toggleupdates")
    @commands.admin_or_permissions(administrator=True)
    async def toggle_update_check(self, ctx: commands.Context):
        """Toggle yt-dlp update notifications"""
        state = await self.config.toggle_setting(ctx.guild.id, "disable_update_check")
        status = "disabled" if state else "enabled"
        await ctx.send(f"Update notifications {status}")

    @videoarchiver.command(name="addrole")
    async def add_allowed_role(self, ctx: commands.Context, role: discord.Role):
        """Add a role that's allowed to trigger archiving"""
        await self.config.add_to_list(ctx.guild.id, "allowed_roles", role.id)
        await ctx.send(f"Added {role.name} to allowed roles")

    @videoarchiver.command(name="removerole")
    async def remove_allowed_role(self, ctx: commands.Context, role: discord.Role):
        """Remove a role from allowed roles"""
        await self.config.remove_from_list(ctx.guild.id, "allowed_roles", role.id)
        await ctx.send(f"Removed {role.name} from allowed roles")

    @videoarchiver.command(name="listroles")
    async def list_allowed_roles(self, ctx: commands.Context):
        """List all roles allowed to trigger archiving"""
        roles = await self.config.get_setting(ctx.guild.id, "allowed_roles")
        if not roles:
            await ctx.send(
                "No roles are currently allowed (all users can trigger archiving)"
            )
            return
        role_names = [
            r.name for r in [ctx.guild.get_role(role_id) for role_id in roles] if r
        ]
        await ctx.send(f"Allowed roles: {', '.join(role_names)}")

    @videoarchiver.command(name="setconcurrent")
    async def set_concurrent_downloads(self, ctx: commands.Context, count: int):
        """Set the number of concurrent downloads (1-5)"""
        if not 1 <= count <= 5:
            await ctx.send("Concurrent downloads must be between 1 and 5")
            return
        await self.config.update_setting(ctx.guild.id, "concurrent_downloads", count)
        await ctx.send(f"Concurrent downloads set to {count}")

    @videoarchiver.command(name="setchannel")
    async def set_archive_channel(
        self, ctx: commands.Context, channel: discord.TextChannel
    ):
        """Set the archive channel"""
        await self.config.update_setting(ctx.guild.id, "archive_channel", channel.id)
        await ctx.send(f"Archive channel set to {channel.mention}")

    @videoarchiver.command(name="setnotification")
    async def set_notification_channel(
        self, ctx: commands.Context, channel: discord.TextChannel
    ):
        """Set the notification channel (where archive messages appear)"""
        await self.config.update_setting(
            ctx.guild.id, "notification_channel", channel.id
        )
        await ctx.send(f"Notification channel set to {channel.mention}")

    @videoarchiver.command(name="setlogchannel")
    async def set_log_channel(
        self, ctx: commands.Context, channel: discord.TextChannel
    ):
        """Set the log channel for error messages and notifications"""
        await self.config.update_setting(ctx.guild.id, "log_channel", channel.id)
        await ctx.send(f"Log channel set to {channel.mention}")

    @videoarchiver.command(name="addmonitor")
    async def add_monitored_channel(
        self, ctx: commands.Context, channel: discord.TextChannel
    ):
        """Add a channel to monitor for videos"""
        await self.config.add_to_list(ctx.guild.id, "monitored_channels", channel.id)
        await ctx.send(f"Now monitoring {channel.mention} for videos")

    @videoarchiver.command(name="removemonitor")
    async def remove_monitored_channel(
        self, ctx: commands.Context, channel: discord.TextChannel
    ):
        """Remove a channel from monitoring"""
        await self.config.remove_from_list(
            ctx.guild.id, "monitored_channels", channel.id
        )
        await ctx.send(f"Stopped monitoring {channel.mention}")

    @videoarchiver.command(name="setformat")
    async def set_video_format(self, ctx: commands.Context, format: str):
        """Set the video format (e.g., mp4, webm)"""
        await self.config.update_setting(ctx.guild.id, "video_format", format.lower())
        await ctx.send(f"Video format set to {format.lower()}")

    @videoarchiver.command(name="setquality")
    async def set_video_quality(self, ctx: commands.Context, quality: int):
        """Set the maximum video quality in pixels (e.g., 1080)"""
        await self.config.update_setting(ctx.guild.id, "video_quality", quality)
        await ctx.send(f"Maximum video quality set to {quality}p")

    @videoarchiver.command(name="setmaxsize")
    async def set_max_file_size(self, ctx: commands.Context, size: int):
        """Set the maximum file size in MB"""
        await self.config.update_setting(ctx.guild.id, "max_file_size", size)
        await ctx.send(f"Maximum file size set to {size}MB")

    @videoarchiver.command(name="toggledelete")
    async def toggle_delete_after_repost(self, ctx: commands.Context):
        """Toggle whether to delete local files after reposting"""
        state = await self.config.toggle_setting(ctx.guild.id, "delete_after_repost")
        await ctx.send(f"Delete after repost: {state}")

    @videoarchiver.command(name="setduration")
    async def set_message_duration(self, ctx: commands.Context, hours: int):
        """Set how long to keep archive messages (0 for permanent)"""
        await self.config.update_setting(ctx.guild.id, "message_duration", hours)
        await ctx.send(f"Archive message duration set to {hours} hours")

    @videoarchiver.command(name="settemplate")
    async def set_message_template(self, ctx: commands.Context, *, template: str):
        """Set the archive message template. Use {author}, {url}, and {original_message} as placeholders"""
        await self.config.update_setting(ctx.guild.id, "message_template", template)
        await ctx.send(f"Archive message template set to:\n{template}")

    @videoarchiver.command(name="enablesites")
    async def enable_sites(self, ctx: commands.Context, *sites: str):
        """Enable specific sites (leave empty for all sites)"""
        sites = [s.lower() for s in sites]
        if not sites:
            await self.config.update_setting(ctx.guild.id, "enabled_sites", [])
            await ctx.send("All sites enabled")
            return

        # Verify sites are valid
        with yt_dlp.YoutubeDL() as ydl:
            valid_sites = set(ie.IE_NAME.lower() for ie in ydl._ies)
            invalid_sites = [s for s in sites if s not in valid_sites]
            if invalid_sites:
                await ctx.send(
                    f"Invalid sites: {', '.join(invalid_sites)}\nValid sites: {', '.join(valid_sites)}"
                )
                return

        await self.config.update_setting(ctx.guild.id, "enabled_sites", sites)
        await ctx.send(f"Enabled sites: {', '.join(sites)}")

    @videoarchiver.command(name="listsites")
    async def list_sites(self, ctx: commands.Context):
        """List all available sites and currently enabled sites"""
        enabled_sites = await self.config.get_setting(ctx.guild.id, "enabled_sites")

        embed = discord.Embed(
            title="Video Sites Configuration", color=discord.Color.blue()
        )

        with yt_dlp.YoutubeDL() as ydl:
            all_sites = sorted(ie.IE_NAME for ie in ydl._ies if ie.IE_NAME is not None)

        # Split sites into chunks for Discord's field value limit
        chunk_size = 20
        site_chunks = [
            all_sites[i : i + chunk_size] for i in range(0, len(all_sites), chunk_size)
        ]

        for i, chunk in enumerate(site_chunks, 1):
            embed.add_field(
                name=f"Available Sites ({i}/{len(site_chunks)})",
                value=", ".join(chunk),
                inline=False,
            )

        embed.add_field(
            name="Currently Enabled",
            value=", ".join(enabled_sites) if enabled_sites else "All sites",
            inline=False,
        )

        await ctx.send(embed=embed)

    @videoarchiver.command(name="queue")
    @commands.admin_or_permissions(administrator=True)
    async def show_queue(self, ctx: commands.Context):
        """Show current queue status with basic metrics"""
        status = self.processor.queue_manager.get_queue_status(ctx.guild.id)

        embed = discord.Embed(
            title="Video Processing Queue Status",
            color=discord.Color.blue(),
            timestamp=datetime.utcnow()
        )

        # Queue Status
        embed.add_field(
            name="Queue Status",
            value=(
                f"📥 Pending: {status['pending']}\n"
                f"⚙️ Processing: {status['processing']}\n"
                f"✅ Completed: {status['completed']}\n"
                f"❌ Failed: {status['failed']}"
            ),
            inline=False
        )

        # Basic Metrics
        metrics = status['metrics']
        embed.add_field(
            name="Basic Metrics",
            value=(
                f"Success Rate: {metrics['success_rate']:.1%}\n"
                f"Avg Processing Time: {metrics['avg_processing_time']:.1f}s"
            ),
            inline=False
        )

        embed.set_footer(text="Use [p]va queuemetrics for detailed performance metrics")
        await ctx.send(embed=embed)

    @videoarchiver.command(name="queuemetrics")
    @commands.admin_or_permissions(administrator=True)
    async def show_queue_metrics(self, ctx: commands.Context):
        """Show detailed queue performance metrics"""
        status = self.processor.queue_manager.get_queue_status(ctx.guild.id)
        metrics = status['metrics']

        embed = discord.Embed(
            title="Queue Performance Metrics",
            color=discord.Color.blue(),
            timestamp=datetime.utcnow()
        )

        # Processing Statistics
        embed.add_field(
            name="Processing Statistics",
            value=(
                f"Total Processed: {metrics['total_processed']}\n"
                f"Total Failed: {metrics['total_failed']}\n"
                f"Success Rate: {metrics['success_rate']:.1%}\n"
                f"Avg Processing Time: {metrics['avg_processing_time']:.1f}s"
            ),
            inline=False
        )

        # Resource Usage
        embed.add_field(
            name="Resource Usage",
            value=(
                f"Peak Memory Usage: {metrics['peak_memory_usage']:.1f}MB\n"
                f"Last Cleanup: {metrics['last_cleanup']}"
            ),
            inline=False
        )

        # Current Queue State
        embed.add_field(
            name="Current Queue State",
            value=(
                f"📥 Pending: {status['pending']}\n"
                f"⚙️ Processing: {status['processing']}\n"
                f"✅ Completed: {status['completed']}\n"
                f"❌ Failed: {status['failed']}"
            ),
            inline=False
        )

        embed.set_footer(text="Metrics are updated in real-time as videos are processed")
        await ctx.send(embed=embed)

    @videoarchiver.command(name="clearqueue")
    @commands.admin_or_permissions(administrator=True)
    async def clear_queue(self, ctx: commands.Context):
        """Clear the video processing queue for this guild"""
        cleared = await self.processor.queue_manager.clear_guild_queue(ctx.guild.id)
        await ctx.send(f"Cleared {cleared} items from the queue")
```
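
The command cog does not construct its own dependencies; the constructor signature above suggests the main `VideoArchiver` cog wires them in. A hypothetical wiring sketch (everything except `VideoArchiverCommands` is an assumption, since `video_archiver.py` is not part of this diff):

```python
# Hypothetical wiring inside the main cog's setup; the real code lives in
# video_archiver.py, which this commit does not show.
commands_cog = VideoArchiverCommands(
    bot,
    config_manager,   # ConfigManager instance (see config_manager.py below)
    update_checker,   # update_checker.py component
    processor,        # processor.py component
)
await bot.add_cog(commands_cog)
```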
**videoarchiver/config_manager.py** (new file, 348 lines)

```python
"""Configuration management for VideoArchiver"""
from redbot.core import Config
from typing import Dict, Any, Optional, List, Union, cast
import discord
import logging
from datetime import datetime
import asyncio
from .exceptions import ConfigError, DiscordAPIError

logger = logging.getLogger('VideoArchiver')


class ConfigManager:
    """Manages guild configurations for VideoArchiver"""

    default_guild = {
        "archive_channel": None,
        "notification_channel": None,
        "log_channel": None,
        "monitored_channels": [],
        "allowed_roles": [],
        "video_format": "mp4",
        "video_quality": 1080,
        "max_file_size": 8,
        "delete_after_repost": True,
        "message_duration": 24,
        "message_template": "Video from {username} in #{channel}\nOriginal: {original_message}",
        "enabled_sites": [],
        "concurrent_downloads": 3,
        "disable_update_check": False,
        "last_update_check": None,
        "max_retries": 3,
        "retry_delay": 5,
        "discord_retry_attempts": 3,
        "discord_retry_delay": 5,
    }

    # Valid settings constraints
    VALID_VIDEO_FORMATS = ["mp4", "webm", "mkv"]
    MAX_QUALITY_RANGE = (144, 4320)  # 144p to 4K
    MAX_FILE_SIZE_RANGE = (1, 100)  # 1MB to 100MB
    MAX_CONCURRENT_DOWNLOADS = 5
    MAX_MESSAGE_DURATION = 168  # 1 week in hours
    MAX_RETRIES = 10
    MAX_RETRY_DELAY = 30

    def __init__(self, bot_config: Config):
        self.config = bot_config
        self.config.register_guild(**self.default_guild)
        self._config_locks: Dict[int, asyncio.Lock] = {}

    async def _get_guild_lock(self, guild_id: int) -> asyncio.Lock:
        """Get or create a lock for guild-specific config operations"""
        if guild_id not in self._config_locks:
            self._config_locks[guild_id] = asyncio.Lock()
        return self._config_locks[guild_id]

    def _validate_setting(self, setting: str, value: Any) -> None:
        """Validate setting value against constraints"""
        try:
            if setting == "video_format" and value not in self.VALID_VIDEO_FORMATS:
                raise ConfigError(f"Invalid video format. Must be one of: {', '.join(self.VALID_VIDEO_FORMATS)}")

            elif setting == "video_quality":
                if not isinstance(value, int) or not (self.MAX_QUALITY_RANGE[0] <= value <= self.MAX_QUALITY_RANGE[1]):
                    raise ConfigError(f"Video quality must be between {self.MAX_QUALITY_RANGE[0]} and {self.MAX_QUALITY_RANGE[1]}")

            elif setting == "max_file_size":
                if not isinstance(value, (int, float)) or not (self.MAX_FILE_SIZE_RANGE[0] <= value <= self.MAX_FILE_SIZE_RANGE[1]):
                    raise ConfigError(f"Max file size must be between {self.MAX_FILE_SIZE_RANGE[0]} and {self.MAX_FILE_SIZE_RANGE[1]} MB")

            elif setting == "concurrent_downloads":
                if not isinstance(value, int) or not (1 <= value <= self.MAX_CONCURRENT_DOWNLOADS):
                    raise ConfigError(f"Concurrent downloads must be between 1 and {self.MAX_CONCURRENT_DOWNLOADS}")

            elif setting == "message_duration":
                if not isinstance(value, int) or not (0 <= value <= self.MAX_MESSAGE_DURATION):
                    raise ConfigError(f"Message duration must be between 0 and {self.MAX_MESSAGE_DURATION} hours")

            elif setting == "max_retries":
                if not isinstance(value, int) or not (0 <= value <= self.MAX_RETRIES):
                    raise ConfigError(f"Max retries must be between 0 and {self.MAX_RETRIES}")

            elif setting == "retry_delay":
                if not isinstance(value, int) or not (1 <= value <= self.MAX_RETRY_DELAY):
                    raise ConfigError(f"Retry delay must be between 1 and {self.MAX_RETRY_DELAY} seconds")

            elif setting in ["message_template"] and not isinstance(value, str):
                raise ConfigError("Message template must be a string")

            elif setting in ["delete_after_repost", "disable_update_check"] and not isinstance(value, bool):
                raise ConfigError(f"{setting} must be a boolean")

        except Exception as e:
            raise ConfigError(f"Validation error for {setting}: {str(e)}")

    async def get_guild_settings(self, guild_id: int) -> Dict[str, Any]:
        """Get all settings for a guild with error handling"""
        try:
            async with await self._get_guild_lock(guild_id):
                return await self.config.guild_from_id(guild_id).all()
        except Exception as e:
            logger.error(f"Failed to get guild settings for {guild_id}: {str(e)}")
            raise ConfigError(f"Failed to get guild settings: {str(e)}")

    async def update_setting(self, guild_id: int, setting: str, value: Any) -> None:
        """Update a specific setting for a guild with validation"""
        try:
            if setting not in self.default_guild:
                raise ConfigError(f"Invalid setting: {setting}")

            self._validate_setting(setting, value)

            async with await self._get_guild_lock(guild_id):
                await self.config.guild_from_id(guild_id).set_raw(setting, value=value)

        except Exception as e:
            logger.error(f"Failed to update setting {setting} for guild {guild_id}: {str(e)}")
            raise ConfigError(f"Failed to update setting: {str(e)}")

    async def get_setting(self, guild_id: int, setting: str) -> Any:
        """Get a specific setting for a guild with error handling"""
        try:
            if setting not in self.default_guild:
                raise ConfigError(f"Invalid setting: {setting}")

            async with await self._get_guild_lock(guild_id):
                return await self.config.guild_from_id(guild_id).get_raw(setting)

        except Exception as e:
            logger.error(f"Failed to get setting {setting} for guild {guild_id}: {str(e)}")
            raise ConfigError(f"Failed to get setting: {str(e)}")

    async def toggle_setting(self, guild_id: int, setting: str) -> bool:
        """Toggle a boolean setting for a guild with validation"""
        try:
            if setting not in self.default_guild:
                raise ConfigError(f"Invalid setting: {setting}")

            async with await self._get_guild_lock(guild_id):
                # Read and write through the raw API directly: calling
                # get_setting()/update_setting() here would try to re-acquire
                # the same (non-reentrant) guild lock and deadlock.
                current = await self.config.guild_from_id(guild_id).get_raw(setting)
                if not isinstance(current, bool):
                    raise ConfigError(f"Setting {setting} is not a boolean")

                await self.config.guild_from_id(guild_id).set_raw(setting, value=not current)
                return not current

        except Exception as e:
            logger.error(f"Failed to toggle setting {setting} for guild {guild_id}: {str(e)}")
            raise ConfigError(f"Failed to toggle setting: {str(e)}")

    async def add_to_list(self, guild_id: int, setting: str, value: Any) -> None:
        """Add a value to a list setting with validation"""
        try:
            if setting not in self.default_guild:
                raise ConfigError(f"Invalid setting: {setting}")

            async with await self._get_guild_lock(guild_id):
                async with self.config.guild_from_id(guild_id).get_attr(setting)() as items:
                    if not isinstance(items, list):
                        raise ConfigError(f"Setting {setting} is not a list")
                    if value not in items:
                        items.append(value)

        except Exception as e:
            logger.error(f"Failed to add to list {setting} for guild {guild_id}: {str(e)}")
            raise ConfigError(f"Failed to add to list: {str(e)}")

    async def remove_from_list(self, guild_id: int, setting: str, value: Any) -> None:
        """Remove a value from a list setting with validation"""
        try:
            if setting not in self.default_guild:
                raise ConfigError(f"Invalid setting: {setting}")

            async with await self._get_guild_lock(guild_id):
                async with self.config.guild_from_id(guild_id).get_attr(setting)() as items:
                    if not isinstance(items, list):
                        raise ConfigError(f"Setting {setting} is not a list")
                    if value in items:
                        items.remove(value)

        except Exception as e:
            logger.error(f"Failed to remove from list {setting} for guild {guild_id}: {str(e)}")
            raise ConfigError(f"Failed to remove from list: {str(e)}")

    async def get_channel(self, guild: discord.Guild, channel_type: str) -> Optional[discord.TextChannel]:
        """Get a channel by type with error handling and validation"""
        try:
            if channel_type not in ["archive", "notification", "log"]:
                raise ConfigError(f"Invalid channel type: {channel_type}")

            settings = await self.get_guild_settings(guild.id)
            channel_id = settings.get(f"{channel_type}_channel")

            if channel_id is None:
                return None

            channel = guild.get_channel(channel_id)
            if channel is None:
                logger.warning(f"Channel {channel_id} not found in guild {guild.id}")
                return None

            if not isinstance(channel, discord.TextChannel):
                raise DiscordAPIError(f"Channel {channel_id} is not a text channel")

            return channel

        except Exception as e:
            logger.error(f"Failed to get {channel_type} channel for guild {guild.id}: {str(e)}")
            raise ConfigError(f"Failed to get channel: {str(e)}")

    async def check_user_roles(self, member: discord.Member) -> bool:
        """Check if user has permission based on allowed roles with error handling"""
        try:
            allowed_roles = await self.get_setting(member.guild.id, "allowed_roles")
            if not allowed_roles:
                return True
            return any(role.id in allowed_roles for role in member.roles)

        except Exception as e:
            logger.error(f"Failed to check roles for user {member.id} in guild {member.guild.id}: {str(e)}")
            raise ConfigError(f"Failed to check user roles: {str(e)}")

    async def get_monitored_channels(self, guild: discord.Guild) -> List[discord.TextChannel]:
        """Get all monitored channels for a guild with validation"""
        try:
            settings = await self.get_guild_settings(guild.id)
            channels: List[discord.TextChannel] = []

            for channel_id in settings["monitored_channels"]:
                channel = guild.get_channel(channel_id)
                if channel and isinstance(channel, discord.TextChannel):
                    channels.append(channel)
                else:
                    logger.warning(f"Invalid monitored channel {channel_id} in guild {guild.id}")

            return channels

        except Exception as e:
            logger.error(f"Failed to get monitored channels for guild {guild.id}: {str(e)}")
            raise ConfigError(f"Failed to get monitored channels: {str(e)}")

    async def format_settings_embed(self, guild: discord.Guild) -> discord.Embed:
        """Format guild settings into a Discord embed with error handling"""
        try:
            settings = await self.get_guild_settings(guild.id)
            embed = discord.Embed(
                title="Video Archiver Settings",
                color=discord.Color.blue(),
                timestamp=datetime.utcnow()
            )

            # Get channels with error handling
            archive_channel = guild.get_channel(settings["archive_channel"]) if settings["archive_channel"] else None
            notification_channel = guild.get_channel(settings["notification_channel"]) if settings["notification_channel"] else None
            log_channel = guild.get_channel(settings["log_channel"]) if settings["log_channel"] else None

            # Get monitored channels and roles with validation
            monitored_channels = []
            for channel_id in settings["monitored_channels"]:
                channel = guild.get_channel(channel_id)
                if channel and isinstance(channel, discord.TextChannel):
                    monitored_channels.append(channel.mention)

            allowed_roles = []
            for role_id in settings["allowed_roles"]:
                role = guild.get_role(role_id)
                if role:
                    allowed_roles.append(role.name)

            # Add fields with proper formatting
            embed.add_field(
                name="Archive Channel",
                value=archive_channel.mention if archive_channel else "Not set",
                inline=False
            )
            embed.add_field(
                name="Notification Channel",
                value=notification_channel.mention if notification_channel else "Same as archive",
                inline=False
            )
            embed.add_field(
                name="Log Channel",
                value=log_channel.mention if log_channel else "Not set",
                inline=False
            )
            embed.add_field(
                name="Monitored Channels",
                value="\n".join(monitored_channels) if monitored_channels else "None",
                inline=False
            )
            embed.add_field(
                name="Allowed Roles",
                value=", ".join(allowed_roles) if allowed_roles else "All roles (no restrictions)",
                inline=False
            )

            # Add other settings with validation
            embed.add_field(
                name="Video Format",
                value=settings["video_format"],
                inline=True
            )
            embed.add_field(
                name="Max Quality",
                value=f"{settings['video_quality']}p",
                inline=True
            )
            embed.add_field(
                name="Max File Size",
                value=f"{settings['max_file_size']}MB",
                inline=True
            )
            embed.add_field(
                name="Delete After Repost",
                value=str(settings["delete_after_repost"]),
                inline=True
            )
            embed.add_field(
                name="Message Duration",
                value=f"{settings['message_duration']} hours",
                inline=True
            )
            embed.add_field(
                name="Concurrent Downloads",
                value=str(settings["concurrent_downloads"]),
                inline=True
            )
            embed.add_field(
                name="Update Check Disabled",
                value=str(settings["disable_update_check"]),
                inline=True
            )

            # Add enabled sites with validation
            embed.add_field(
                name="Enabled Sites",
                value=", ".join(settings["enabled_sites"]) if settings["enabled_sites"] else "All sites",
                inline=False
            )

            # Add footer with last update time
            embed.set_footer(text="Last updated")

            return embed

        except Exception as e:
            logger.error(f"Failed to format settings embed for guild {guild.id}: {str(e)}")
            raise ConfigError(f"Failed to format settings: {str(e)}")
```
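
A minimal usage sketch, assuming a Red `Config` object has already been created for the cog (the identifier here is a placeholder; the real one lives in `video_archiver.py`):

```python
from redbot.core import Config

# Hypothetical initialization outside a cog instance.
config = Config.get_conf(None, identifier=1234567890, cog_name="VideoArchiver")
manager = ConfigManager(config)

# Later, inside an async context:
#   await manager.update_setting(guild_id, "video_quality", 9999)
# raises ConfigError, because 9999 falls outside MAX_QUALITY_RANGE (144-4320).
#   await manager.update_setting(guild_id, "video_quality", 1080)
# passes validation and persists under that guild's scope only.
```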
488
videoarchiver/enhanced_queue.py
Normal file
488
videoarchiver/enhanced_queue.py
Normal file
@@ -0,0 +1,488 @@
|
|||||||
|
"""Enhanced queue system for VideoArchiver with improved memory management and performance"""
|
||||||
|
import asyncio
|
||||||
|
import logging
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import time
|
||||||
|
import psutil
|
||||||
|
from typing import Dict, Optional, Set, Tuple, Callable, Any, List, Union
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
import traceback
|
||||||
|
from dataclasses import dataclass, asdict, field
|
||||||
|
import weakref
|
||||||
|
from pathlib import Path
|
||||||
|
import aiofiles
|
||||||
|
import aiofiles.os
|
||||||
|
import sys
|
||||||
|
import signal
|
||||||
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
|
from functools import partial
|
||||||
|
import tempfile
|
||||||
|
import shutil
|
||||||
|
from .exceptions import (
|
||||||
|
QueueError,
|
||||||
|
ResourceExhaustedError,
|
||||||
|
ProcessingError,
|
||||||
|
CleanupError,
|
||||||
|
FileOperationError
|
||||||
|
)
|
||||||
|
|
||||||
|
# Configure logging with proper format
|
||||||
|
logging.basicConfig(
|
||||||
|
level=logging.INFO,
|
||||||
|
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||||
|
)
|
||||||
|
logger = logging.getLogger('EnhancedQueueManager')
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class QueueItem:
|
||||||
|
"""Represents a video processing task in the queue"""
|
||||||
|
url: str
|
||||||
|
message_id: int
|
||||||
|
channel_id: int
|
||||||
|
guild_id: int
|
||||||
|
author_id: int
|
||||||
|
added_at: datetime
|
||||||
|
priority: int = 0 # Higher number = higher priority
|
||||||
|
status: str = "pending" # pending, processing, completed, failed
|
||||||
|
error: Optional[str] = None
|
||||||
|
attempt: int = 0
|
||||||
|
processing_time: float = 0.0
|
||||||
|
size_bytes: int = 0
|
||||||
|
last_error: Optional[str] = None
|
||||||
|
retry_count: int = 0
|
||||||
|
last_retry: Optional[datetime] = None
|
||||||
|
self.processing_times: List[float] = []
|
||||||
|
self.last_error: Optional[str] = None
|
||||||
|
self.last_error_time: Optional[datetime] = None
|
||||||
|
|
||||||
|
def update_metrics(self, processing_time: float, success: bool, error: str = None):
|
||||||
|
"""Update metrics with new processing information"""
|
||||||
|
self.total_processed += 1
|
||||||
|
if not success:
|
||||||
|
self.total_failed += 1
|
||||||
|
if error:
|
||||||
|
self.last_error = error
|
||||||
|
self.last_error_time = datetime.utcnow()
|
||||||
|
error_type = error.split(':')[0] if ':' in error else error
|
||||||
|
self.errors_by_type[error_type] = self.errors_by_type.get(error_type, 0) + 1
|
||||||
|
|
||||||
|
        # Update processing times with sliding window
        self.processing_times.append(processing_time)
        if len(self.processing_times) > 100:  # Keep last 100 processing times
            self.processing_times.pop(0)

        # Update average processing time
        self.avg_processing_time = sum(self.processing_times) / len(self.processing_times)

        # Update success rate
        self.success_rate = (
            (self.total_processed - self.total_failed) / self.total_processed
            if self.total_processed > 0 else 0.0
        )

        # Update peak memory usage
        current_memory = psutil.Process().memory_info().rss / 1024 / 1024  # MB
        self.peak_memory_usage = max(self.peak_memory_usage, current_memory)


class EnhancedVideoQueueManager:
    """Enhanced queue manager with improved memory management and performance"""

    def __init__(
        self,
        max_retries: int = 3,
        retry_delay: int = 5,
        max_queue_size: int = 1000,
        cleanup_interval: int = 3600,  # 1 hour
        max_history_age: int = 86400,  # 24 hours
        persistence_path: Optional[str] = None,
        backup_interval: int = 300,  # 5 minutes
    ):
        self.max_retries = max_retries
        self.retry_delay = retry_delay
        self.max_queue_size = max_queue_size
        self.cleanup_interval = cleanup_interval
        self.max_history_age = max_history_age
        self.persistence_path = persistence_path
        self.backup_interval = backup_interval

        # Queue storage with priority
        self._queue: List[QueueItem] = []
        self._queue_lock = asyncio.Lock()
        self._processing: Dict[str, QueueItem] = {}
        self._completed: Dict[str, QueueItem] = {}
        self._failed: Dict[str, QueueItem] = {}

        # Track active tasks
        self._active_tasks: Set[asyncio.Task] = set()
        self._processing_lock = asyncio.Lock()

        # Status tracking
        self._guild_queues: Dict[int, Set[str]] = {}
        self._channel_queues: Dict[int, Set[str]] = {}

        # Metrics tracking
        self.metrics = QueueMetrics()

        # Recovery tracking
        self._recovery_attempts: Dict[str, int] = {}
        self._last_backup: Optional[datetime] = None

        # Initialize background tasks
        self._init_tasks()

    def _init_tasks(self):
        """Initialize background tasks"""
        # Cleanup and monitoring
        self._cleanup_task = asyncio.create_task(self._periodic_cleanup())
        self._active_tasks.add(self._cleanup_task)

        # Health monitoring
        self._health_check_task = asyncio.create_task(self._monitor_health())
        self._active_tasks.add(self._health_check_task)

        # Backup task
        if self.persistence_path:
            self._backup_task = asyncio.create_task(self._periodic_backup())
            self._active_tasks.add(self._backup_task)

        # Load persisted queue
        self._load_persisted_queue()

    async def add_to_queue(
        self,
        url: str,
        message_id: int,
        channel_id: int,
        guild_id: int,
        author_id: int,
        callback: Callable[[str, bool, str], Any],
        priority: int = 0,
    ) -> bool:
        """Add a video to the processing queue with priority support"""
        try:
            async with self._queue_lock:
                if len(self._queue) >= self.max_queue_size:
                    raise QueueError("Queue is full")

                # Check system resources
                if psutil.virtual_memory().percent > 90:
                    raise ResourceExhaustedError("System memory is critically low")

                # Create queue item
                item = QueueItem(
                    url=url,
                    message_id=message_id,
                    channel_id=channel_id,
                    guild_id=guild_id,
                    author_id=author_id,
                    added_at=datetime.utcnow(),
                    priority=priority,
                )

                # Add to tracking collections
                if guild_id not in self._guild_queues:
                    self._guild_queues[guild_id] = set()
                self._guild_queues[guild_id].add(url)

                if channel_id not in self._channel_queues:
                    self._channel_queues[channel_id] = set()
                self._channel_queues[channel_id].add(url)

                # Add to queue ordered by priority
                self._queue.append(item)
                self._queue.sort(key=lambda x: (-x.priority, x.added_at))

                # Persist queue state
                if self.persistence_path:
                    await self._persist_queue()

                logger.info(f"Added video to queue: {url} with priority {priority}")
                return True

        except Exception as e:
            logger.error(f"Error adding video to queue: {traceback.format_exc()}")
            raise QueueError(f"Failed to add to queue: {str(e)}")
    async def _periodic_backup(self):
        """Periodically back up the queue state"""
        while True:
            try:
                if self.persistence_path and (
                    not self._last_backup
                    or (datetime.utcnow() - self._last_backup).total_seconds() >= self.backup_interval
                ):
                    await self._persist_queue()
                    self._last_backup = datetime.utcnow()
                await asyncio.sleep(self.backup_interval)
            except Exception as e:
                logger.error(f"Error in periodic backup: {str(e)}")
                await asyncio.sleep(60)

    async def _persist_queue(self):
        """Persist queue state to disk with improved error handling"""
        if not self.persistence_path:
            return

        try:
            state = {
                "queue": [asdict(item) for item in self._queue],
                "processing": {k: asdict(v) for k, v in self._processing.items()},
                "completed": {k: asdict(v) for k, v in self._completed.items()},
                "failed": {k: asdict(v) for k, v in self._failed.items()},
                "metrics": {
                    "total_processed": self.metrics.total_processed,
                    "total_failed": self.metrics.total_failed,
                    "avg_processing_time": self.metrics.avg_processing_time,
                    "success_rate": self.metrics.success_rate,
                    "errors_by_type": self.metrics.errors_by_type,
                    "last_error": self.metrics.last_error,
                    "last_error_time": self.metrics.last_error_time.isoformat() if self.metrics.last_error_time else None,
                },
            }

            # Ensure directory exists
            os.makedirs(os.path.dirname(self.persistence_path), exist_ok=True)

            # Write to a temp file first
            temp_path = f"{self.persistence_path}.tmp"
            async with aiofiles.open(temp_path, 'w') as f:
                await f.write(json.dumps(state, default=str))
                await f.flush()
                os.fsync(f.fileno())

            # Atomic rename
            await aiofiles.os.rename(temp_path, self.persistence_path)

        except Exception as e:
            logger.error(f"Error persisting queue state: {traceback.format_exc()}")
            raise QueueError(f"Failed to persist queue state: {str(e)}")
    def _load_persisted_queue(self):
        """Load persisted queue state from disk with improved error handling"""
        if not self.persistence_path or not os.path.exists(self.persistence_path):
            return

        try:
            with open(self.persistence_path, 'r') as f:
                state = json.load(f)

            # Restore queue items with datetime conversion
            self._queue = []
            for item in state["queue"]:
                item["added_at"] = datetime.fromisoformat(item["added_at"])
                if item.get("last_retry"):
                    item["last_retry"] = datetime.fromisoformat(item["last_retry"])
                self._queue.append(QueueItem(**item))

            self._processing = {k: QueueItem(**v) for k, v in state["processing"].items()}
            self._completed = {k: QueueItem(**v) for k, v in state["completed"].items()}
            self._failed = {k: QueueItem(**v) for k, v in state["failed"].items()}

            # Restore metrics
            self.metrics.total_processed = state["metrics"]["total_processed"]
            self.metrics.total_failed = state["metrics"]["total_failed"]
            self.metrics.avg_processing_time = state["metrics"]["avg_processing_time"]
            self.metrics.success_rate = state["metrics"]["success_rate"]
            self.metrics.errors_by_type = state["metrics"]["errors_by_type"]
            self.metrics.last_error = state["metrics"]["last_error"]
            if state["metrics"]["last_error_time"]:
                self.metrics.last_error_time = datetime.fromisoformat(state["metrics"]["last_error_time"])

            logger.info("Successfully loaded persisted queue state")

        except Exception as e:
            logger.error(f"Error loading persisted queue state: {traceback.format_exc()}")
            # Create a backup of the corrupted state file
            if os.path.exists(self.persistence_path):
                backup_path = f"{self.persistence_path}.bak.{int(time.time())}"
                try:
                    os.rename(self.persistence_path, backup_path)
                    logger.info(f"Created backup of corrupted state file: {backup_path}")
                except Exception as be:
                    logger.error(f"Failed to create backup of corrupted state file: {str(be)}")

    async def _monitor_health(self):
        """Monitor queue health and performance with improved metrics"""
        while True:
            try:
                # Check memory usage
                process = psutil.Process()
                memory_usage = process.memory_info().rss / 1024 / 1024  # MB

                if memory_usage > 1024:  # 1GB
                    logger.warning(f"High memory usage detected: {memory_usage:.2f}MB")
                    # Force garbage collection
                    import gc
                    gc.collect()

                # Check for potential deadlocks
                processing_times = [
                    time.time() - item.processing_time
                    for item in self._processing.values()
                    if item.processing_time > 0
                ]

                if processing_times:
                    max_time = max(processing_times)
                    if max_time > 3600:  # 1 hour
                        logger.warning(f"Potential deadlock detected: Item processing for {max_time:.2f}s")
                        # Attempt recovery
                        await self._recover_stuck_items()

                # Calculate and log detailed metrics
                success_rate = self.metrics.success_rate
                error_distribution = self.metrics.errors_by_type
                avg_processing_time = self.metrics.avg_processing_time

                logger.info(
                    f"Queue Health Metrics:\n"
                    f"- Success Rate: {success_rate:.2%}\n"
                    f"- Avg Processing Time: {avg_processing_time:.2f}s\n"
                    f"- Memory Usage: {memory_usage:.2f}MB\n"
                    f"- Error Distribution: {error_distribution}\n"
                    f"- Queue Size: {len(self._queue)}\n"
                    f"- Processing Items: {len(self._processing)}"
                )

                await asyncio.sleep(300)  # Check every 5 minutes

            except Exception as e:
                logger.error(f"Error in health monitor: {traceback.format_exc()}")
                await asyncio.sleep(60)

    async def _recover_stuck_items(self):
        """Attempt to recover stuck items in the processing queue"""
        try:
            async with self._processing_lock:
                current_time = time.time()
                for url, item in list(self._processing.items()):
                    if item.processing_time > 0 and (current_time - item.processing_time) > 3600:
                        if item.retry_count >= self.max_retries:
                            # Move to failed queue once max retries are reached
                            self._failed[url] = item
                            self._processing.pop(url)
                            logger.warning(f"Moved stuck item to failed queue: {url}")
                        else:
                            # Increment retry count and reset for reprocessing
                            item.retry_count += 1
                            item.processing_time = 0
                            item.last_retry = datetime.utcnow()
                            item.status = "pending"
                            self._queue.append(item)
                            self._processing.pop(url)
                            logger.info(f"Recovered stuck item for retry: {url}")

        except Exception as e:
            logger.error(f"Error recovering stuck items: {str(e)}")

    async def cleanup(self):
        """Clean up resources and stop queue processing"""
        try:
            # Cancel all monitoring tasks
            for task in self._active_tasks:
                if not task.done():
                    task.cancel()

            await asyncio.gather(*self._active_tasks, return_exceptions=True)

            # Persist final state
            if self.persistence_path:
                await self._persist_queue()

            # Clear all collections
            self._queue.clear()
            self._processing.clear()
            self._completed.clear()
            self._failed.clear()
            self._guild_queues.clear()
            self._channel_queues.clear()

            logger.info("Queue manager cleanup completed")

        except Exception as e:
            logger.error(f"Error during cleanup: {str(e)}")
            raise CleanupError(f"Failed to clean up queue manager: {str(e)}")

    def get_queue_status(self, guild_id: Optional[int] = None) -> Dict[str, Any]:
        """Get detailed queue status with metrics"""
        try:
            if guild_id is not None:
                guild_urls = self._guild_queues.get(guild_id, set())
                status = {
                    "pending": sum(1 for item in self._queue if item.url in guild_urls),
                    "processing": sum(1 for url in self._processing if url in guild_urls),
                    "completed": sum(1 for url in self._completed if url in guild_urls),
                    "failed": sum(1 for url in self._failed if url in guild_urls),
                }
            else:
                status = {
                    "pending": len(self._queue),
                    "processing": len(self._processing),
                    "completed": len(self._completed),
                    "failed": len(self._failed),
                }

            # Add detailed metrics
            status.update({
                "metrics": {
                    "total_processed": self.metrics.total_processed,
                    "total_failed": self.metrics.total_failed,
                    "success_rate": self.metrics.success_rate,
                    "avg_processing_time": self.metrics.avg_processing_time,
                    "peak_memory_usage": self.metrics.peak_memory_usage,
                    "last_cleanup": self.metrics.last_cleanup.isoformat(),
                    "errors_by_type": self.metrics.errors_by_type,
                    "last_error": self.metrics.last_error,
                    "last_error_time": self.metrics.last_error_time.isoformat() if self.metrics.last_error_time else None,
                    "retries": self.metrics.retries,
                }
            })

            return status

        except Exception as e:
            logger.error(f"Error getting queue status: {str(e)}")
            raise QueueError(f"Failed to get queue status: {str(e)}")

    async def _periodic_cleanup(self):
        """Periodically clean up old completed/failed items"""
        while True:
            try:
                current_time = datetime.utcnow()
                cleanup_cutoff = current_time - timedelta(seconds=self.max_history_age)

                async with self._queue_lock:
                    # Clean up completed items
                    for url in list(self._completed.keys()):
                        item = self._completed[url]
                        if item.added_at < cleanup_cutoff:
                            self._completed.pop(url)

                    # Clean up failed items
                    for url in list(self._failed.keys()):
                        item = self._failed[url]
                        if item.added_at < cleanup_cutoff:
                            self._failed.pop(url)

                    # Clean up guild and channel tracking; _queue holds
                    # QueueItem objects, so membership is checked against item URLs
                    pending_urls = {item.url for item in self._queue}
                    for guild_id in list(self._guild_queues.keys()):
                        self._guild_queues[guild_id] = {
                            url for url in self._guild_queues[guild_id]
                            if url in pending_urls or url in self._processing
                        }

                    for channel_id in list(self._channel_queues.keys()):
                        self._channel_queues[channel_id] = {
                            url for url in self._channel_queues[channel_id]
                            if url in pending_urls or url in self._processing
                        }

                self.metrics.last_cleanup = current_time
                logger.info("Completed periodic queue cleanup")

                await asyncio.sleep(self.cleanup_interval)

            except Exception as e:
                logger.error(f"Error in periodic cleanup: {traceback.format_exc()}")
                await asyncio.sleep(60)
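A minimal usage sketch for the queue manager above (illustrative only: the callback body, URL, and IDs are placeholders, and the persistence path is an assumption):

    import asyncio

    async def main():
        async def on_processed(url: str, success: bool, error: str) -> bool:
            # Real processing (download/upload) would happen here.
            print(f"{url}: success={success} error={error!r}")
            return True

        manager = EnhancedVideoQueueManager(persistence_path="data/queue_state.json")
        await manager.add_to_queue(
            url="https://example.com/video",
            message_id=1, channel_id=2, guild_id=3, author_id=4,
            callback=on_processed,
            priority=1,
        )
        print(manager.get_queue_status(guild_id=3))
        await manager.cleanup()

    asyncio.run(main())

Note that the constructor schedules its background tasks with asyncio.create_task, so it must be instantiated inside a running event loop, as above.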
64 videoarchiver/exceptions.py Normal file
@@ -0,0 +1,64 @@
"""Custom exceptions for the VideoArchiver cog"""

from typing import Optional


class ProcessingError(Exception):
    """Base exception for video processing errors"""

    def __init__(self, message: str, details: Optional[str] = None):
        self.message = message
        self.details = details
        super().__init__(self.message)


class DiscordAPIError(ProcessingError):
    """Raised when Discord API operations fail"""
    pass


class UpdateError(ProcessingError):
    """Raised when update operations fail"""
    pass


class DownloadError(ProcessingError):
    """Raised when video download operations fail"""
    pass


class QueueError(ProcessingError):
    """Raised when queue operations fail"""
    pass


class ConfigError(ProcessingError):
    """Raised when configuration operations fail"""
    pass


class FileOperationError(ProcessingError):
    """Raised when file operations fail"""
    pass


class VideoValidationError(ProcessingError):
    """Raised when video validation fails"""
    pass


class PermissionError(ProcessingError):  # note: shadows Python's built-in PermissionError in this module
    """Raised when permission checks fail"""
    pass


class ResourceExhaustedError(ProcessingError):
    """Raised when system resources are exhausted"""
    pass


class NetworkError(ProcessingError):
    """Raised when network operations fail"""
    pass


class FFmpegError(ProcessingError):
    """Raised when FFmpeg operations fail"""
    pass


class CleanupError(ProcessingError):
    """Raised when cleanup operations fail"""
    pass


class URLExtractionError(ProcessingError):
    """Raised when URL extraction fails"""
    pass


class MessageFormatError(ProcessingError):
    """Raised when message formatting fails"""
    pass
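Because every exception above derives from ProcessingError, callers can catch the whole family in one handler while still branching on specific failures first. A short sketch (illustrative; the error strings are placeholders):

    try:
        raise DownloadError("fetch failed", details="HTTP 403")
    except QueueError:
        pass  # queue-specific handling would go here
    except ProcessingError as e:
        # Catches DownloadError and every other subclass above
        print(f"processing failed: {e.message} ({e.details})")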
@@ -1,24 +1,55 @@
 {
     "name": "VideoArchiver",
-    "author": ["PacNPal"],
-    "description": "A powerful Discord video archiver cog that automatically downloads and reposts videos from monitored channels. Features include:\n- GPU-accelerated video compression (NVIDIA, AMD, Intel)\n- Multi-core CPU utilization\n- Concurrent multi-video processing\n- Intelligent quality preservation\n- Support for multiple video sites\n- Customizable archive messages\n- Automatic cleanup\n- Automatic yt-dlp updates",
-    "short": "Archive videos from Discord channels with GPU-accelerated compression",
-    "tags": [
-        "video",
-        "archive",
-        "download",
-        "compression",
-        "media"
-    ],
+    "short": "Archive videos from Discord channels",
+    "description": "A cog to automatically archive videos posted in monitored Discord channels. Supports multiple video platforms, queue management, and hardware acceleration. Features include:\n- Automatic video detection and downloading\n- Support for multiple video platforms\n- Queue management with priority handling\n- Hardware-accelerated video processing\n- Configurable quality and format settings\n- Automatic cleanup of temporary files\n- Detailed error reporting and logging",
+    "end_user_data_statement": "This cog stores the following data:\n1. Guild-specific settings (channels, roles, preferences)\n2. Temporary video files during processing (automatically deleted)\n3. Message and channel IDs for tracking processed videos\n4. Queue state for video processing\n\nNo personal user data is permanently stored.",
+    "install_msg": "Thanks for installing VideoArchiver! Before using:\n1. Ensure FFmpeg is installed on your system\n2. Configure archive and monitored channels using `[p]videoarchiver`\n3. Use `[p]help VideoArchiver` to see all commands\n\nFor support or issues, please visit the repository.",
+    "author": [
+        "Cline"
+    ],
+    "required_cogs": {},
     "requirements": [
         "yt-dlp>=2024.11.4",
         "ffmpeg-python>=0.2.0",
-        "requests>=2.32.3",
-        "setuptools>=65.5.1",
-        "aiohttp>=3.9.1"
+        "packaging>=23.0",
+        "aiohttp>=3.8.0",
+        "psutil>=5.9.0"
     ],
+    "tags": [
+        "video",
+        "archive",
+        "media",
+        "youtube",
+        "download",
+        "automation",
+        "queue",
+        "ffmpeg"
+    ],
     "min_bot_version": "3.5.0",
+    "min_python_version": [3, 8, 0],
     "hidden": false,
     "disabled": false,
-    "type": "COG"
+    "type": "COG",
+    "permissions": [
+        "attach_files",
+        "embed_links",
+        "manage_messages",
+        "read_message_history"
+    ],
+    "end_user_data_statement_required": true,
+    "required_system_packages": [
+        {
+            "linux": "ffmpeg",
+            "osx": "ffmpeg",
+            "windows": "ffmpeg"
+        }
+    ],
+    "max_bot_version": "3.6.99",
+    "suggested_bot_permissions": [
+        "attach_files",
+        "embed_links",
+        "manage_messages",
+        "read_message_history",
+        "add_reactions"
+    ]
 }
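A quick way to sanity-check the metadata after editing it (illustrative; the path and the chosen keys are assumptions, not a Red requirement list):

    import json

    with open("videoarchiver/info.json") as f:
        info = json.load(f)

    for key in ("name", "description", "requirements", "min_bot_version"):
        assert key in info, f"info.json is missing expected key: {key}"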
324 videoarchiver/processor.py Normal file
@@ -0,0 +1,324 @@
"""Video processing logic for VideoArchiver"""

import discord
import logging
import yt_dlp
import re
import os
from typing import List, Optional, Tuple, Callable, Any
import asyncio
import traceback
from datetime import datetime

from .utils import VideoDownloader, secure_delete_file, cleanup_downloads
from .exceptions import ProcessingError, DiscordAPIError
from .enhanced_queue import EnhancedVideoQueueManager

logger = logging.getLogger('VideoArchiver')


class VideoProcessor:
    """Handles video processing operations"""

    def __init__(self, bot, config_manager, components):
        self.bot = bot
        self.config = config_manager
        self.components = components

        # Initialize enhanced queue manager with persistence and error recovery
        queue_path = os.path.join(os.path.dirname(__file__), "data", "queue_state.json")
        self.queue_manager = EnhancedVideoQueueManager(
            max_retries=3,
            retry_delay=5,
            max_queue_size=1000,
            cleanup_interval=1800,  # 30 minutes (reduced from 1 hour for more frequent cleanup)
            max_history_age=86400,  # 24 hours
            persistence_path=queue_path,
        )

        # Track failed downloads for cleanup
        self._failed_downloads = set()
        self._failed_downloads_lock = asyncio.Lock()

    async def process_video_url(self, url: str, message: discord.Message, priority: int = 0) -> bool:
        """Process a video URL: download, reupload, and clean up"""
        guild_id = message.guild.id
        start_time = datetime.utcnow()

        try:
            # Add initial reactions
            await message.add_reaction("📹")
            await message.add_reaction("⏳")
            await self._log_message(message.guild, f"Processing video URL: {url}")

            settings = await self.config.get_guild_settings(guild_id)

            # Check user roles with a detailed error message
            if not await self.config.check_user_roles(message.author):
                await message.remove_reaction("⏳", self.bot.user)
                await message.add_reaction("🚫")
                await self._log_message(
                    message.guild,
                    f"User {message.author} does not have required roles for video archiving",
                    "warning",
                )
                return False

            # Create callback for queue processing with enhanced error handling
            async def process_callback(url: str, success: bool, error: str) -> bool:
                file_path = None
                try:
                    if not success:
                        await message.remove_reaction("⏳", self.bot.user)
                        await message.add_reaction("❌")
                        await self._log_message(
                            message.guild,
                            f"Failed to process video: {error}",
                            "error",
                        )
                        return False

                    # Download video with enhanced error handling
                    try:
                        success, file_path, error = await self.components[guild_id][
                            "downloader"
                        ].download_video(url)
                    except Exception as e:
                        logger.error(f"Download error: {traceback.format_exc()}")
                        success, file_path, error = False, None, str(e)

                    if not success:
                        await message.remove_reaction("⏳", self.bot.user)
                        await message.add_reaction("❌")
                        await self._log_message(
                            message.guild,
                            f"Failed to download video: {error}",
                            "error",
                        )
                        # Track failed download for cleanup
                        if file_path:
                            async with self._failed_downloads_lock:
                                self._failed_downloads.add(file_path)
                        return False

                    # Get channels with enhanced error handling
                    try:
                        archive_channel = await self.config.get_channel(message.guild, "archive")
                        notification_channel = await self.config.get_channel(message.guild, "notification")
                        if not notification_channel:
                            notification_channel = archive_channel

                        if not archive_channel or not notification_channel:
                            raise DiscordAPIError("Required channels not found")
                    except Exception as e:
                        await self._log_message(
                            message.guild,
                            f"Channel configuration error: {str(e)}",
                            "error",
                        )
                        return False

                    try:
                        # Upload to archive channel with original message link
                        file = discord.File(file_path)
                        archive_message = await archive_channel.send(
                            f"Original: {message.jump_url}",
                            file=file,
                        )

                        # Send notification with enhanced error handling for message formatting
                        try:
                            notification_content = self.components[guild_id]["message_manager"].format_archive_message(
                                username=message.author.name,
                                channel=message.channel.name,
                                original_message=message.jump_url,
                            )
                        except Exception as e:
                            logger.error(f"Message formatting error: {str(e)}")
                            notification_content = (
                                f"Video archived from {message.author.name} in {message.channel.name}\n"
                                f"Original: {message.jump_url}"
                            )

                        notification_message = await notification_channel.send(notification_content)

                        # Schedule notification message deletion with error handling
                        try:
                            await self.components[guild_id][
                                "message_manager"
                            ].schedule_message_deletion(
                                notification_message.id, notification_message.delete
                            )
                        except Exception as e:
                            logger.error(f"Failed to schedule message deletion: {str(e)}")

                        # Update reaction to show completion
                        await message.remove_reaction("⏳", self.bot.user)
                        await message.add_reaction("✅")

                        # Log processing time
                        processing_time = (datetime.utcnow() - start_time).total_seconds()
                        await self._log_message(
                            message.guild,
                            f"Successfully archived video from {message.author} (took {processing_time:.1f}s)",
                        )

                        return True

                    except discord.HTTPException as e:
                        await self._log_message(
                            message.guild,
                            f"Discord API error: {str(e)}",
                            "error",
                        )
                        await message.remove_reaction("⏳", self.bot.user)
                        await message.add_reaction("❌")
                        return False

                    finally:
                        # Always attempt to delete the file if configured
                        if settings["delete_after_repost"] and file_path:
                            try:
                                if secure_delete_file(file_path):
                                    await self._log_message(
                                        message.guild,
                                        f"Successfully deleted file: {file_path}",
                                    )
                                else:
                                    await self._log_message(
                                        message.guild,
                                        f"Failed to delete file: {file_path}",
                                        "error",
                                    )
                                    # Emergency cleanup
                                    cleanup_downloads(str(self.components[guild_id]["downloader"].download_path))
                            except Exception as e:
                                logger.error(f"File deletion error: {str(e)}")
                                # Track for later cleanup
                                async with self._failed_downloads_lock:
                                    self._failed_downloads.add(file_path)

                except Exception as e:
                    logger.error(f"Process callback error: {traceback.format_exc()}")
                    await self._log_message(
                        message.guild,
                        f"Error in process callback: {str(e)}",
                        "error",
                    )
                    return False

            # Add to enhanced queue with priority and error handling
            try:
                await self.queue_manager.add_to_queue(
                    url=url,
                    message_id=message.id,
                    channel_id=message.channel.id,
                    guild_id=guild_id,
                    author_id=message.author.id,
                    callback=process_callback,
                    priority=priority,
                )
            except Exception as e:
                logger.error(f"Queue error: {str(e)}")
                await message.remove_reaction("⏳", self.bot.user)
                await message.add_reaction("❌")
                await self._log_message(
                    message.guild,
                    f"Failed to add to queue: {str(e)}",
                    "error",
                )
                return False

            # Log queue metrics with enhanced information
            queue_status = self.queue_manager.get_queue_status(guild_id)
            await self._log_message(
                message.guild,
                f"Queue Status - Pending: {queue_status['pending']}, "
                f"Processing: {queue_status['processing']}, "
                f"Success Rate: {queue_status['metrics']['success_rate']:.2%}, "
                f"Avg Processing Time: {queue_status['metrics']['avg_processing_time']:.1f}s",
            )

            return True

        except Exception as e:
            logger.error(f"Error processing video: {traceback.format_exc()}")
            await self._log_message(
                message.guild,
                f"Error processing video: {str(e)}",
                "error",
            )
            await message.remove_reaction("⏳", self.bot.user)
            await message.add_reaction("❌")
            return False

    async def process_message(self, message: discord.Message) -> None:
        """Process a message for video URLs"""
        if message.author.bot or not message.guild:
            return

        try:
            settings = await self.config.get_guild_settings(message.guild.id)

            # Check if the message is in a monitored channel
            if message.channel.id not in settings["monitored_channels"]:
                return

            # Find all video URLs in the message with improved pattern matching
            urls = self._extract_urls(message.content)

            if urls:
                # Process each URL with priority based on position;
                # the first URL gets the highest priority
                for i, url in enumerate(urls):
                    priority = len(urls) - i
                    await self.process_video_url(url, message, priority)

        except Exception as e:
            logger.error(f"Error processing message: {traceback.format_exc()}")
            await self._log_message(
                message.guild,
                f"Error processing message: {str(e)}",
                "error",
            )

    def _extract_urls(self, content: str) -> List[str]:
        """Extract video URLs from message content with improved pattern matching"""
        urls = []
        try:
            with yt_dlp.YoutubeDL() as ydl:
                for ie in ydl._ies:
                    if ie._VALID_URL:
                        # Use more specific pattern matching
                        pattern = f"(?P<url>{ie._VALID_URL})"
                        matches = re.finditer(pattern, content, re.IGNORECASE)
                        urls.extend(match.group("url") for match in matches)
        except Exception as e:
            logger.error(f"URL extraction error: {str(e)}")
        return list(dict.fromkeys(urls))  # Remove duplicates, preserving first-seen order
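    # Illustrative behavior (commentary, not part of the module): for a message
    # like "watch https://www.youtube.com/watch?v=dQw4w9WgXcQ twice", the method
    # returns the URL once. dict.fromkeys() keeps first-seen order, which
    # process_message relies on when giving earlier URLs higher priority.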
    async def _log_message(self, guild: discord.Guild, message: str, level: str = "info"):
        """Log a message to the guild's log channel with enhanced formatting"""
        log_channel = await self.config.get_channel(guild, "log")
        if log_channel:
            try:
                # Format message with timestamp and level
                formatted_message = f"[{datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')}] [{level.upper()}] {message}"
                await log_channel.send(formatted_message)
            except discord.HTTPException as e:
                logger.error(f"Failed to send log message to channel: {message} ({str(e)})")
        logger.log(getattr(logging, level.upper()), message)

    async def cleanup(self):
        """Clean up resources with enhanced error handling"""
        try:
            # Clean up queue
            await self.queue_manager.cleanup()

            # Clean up failed downloads
            async with self._failed_downloads_lock:
                for file_path in self._failed_downloads:
                    try:
                        if os.path.exists(file_path):
                            secure_delete_file(file_path)
                    except Exception as e:
                        logger.error(f"Failed to clean up file {file_path}: {str(e)}")
                self._failed_downloads.clear()

        except Exception as e:
            logger.error(f"Error during cleanup: {str(e)}")
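How the processor is expected to be driven (a sketch; the listener wiring lives in the main cog class, which is not shown in this hunk, so the attribute name `self.processor` is an assumption):

    @commands.Cog.listener()
    async def on_message(self, message: discord.Message):
        # Hand every message to the processor; it filters out bots,
        # unmonitored channels, and messages without video URLs itself.
        await self.processor.process_message(message)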
247 videoarchiver/queue_manager.py Normal file
@@ -0,0 +1,247 @@
import asyncio
import logging
from typing import Dict, Optional, Set, Tuple, Callable, Any
from datetime import datetime
import traceback
from dataclasses import dataclass
import weakref

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger('QueueManager')


@dataclass
class QueueItem:
    """Represents a video processing task in the queue"""
    url: str
    message_id: int
    channel_id: int
    guild_id: int
    author_id: int
    added_at: datetime
    callback: Callable[[str, bool, str], Any]
    status: str = "pending"  # pending, processing, completed, failed
    error: Optional[str] = None
    attempt: int = 0


class VideoQueueManager:
    """Manages a queue of videos to be processed, ensuring sequential processing"""

    def __init__(self, max_retries: int = 3, retry_delay: int = 5):
        self.max_retries = max_retries
        self.retry_delay = retry_delay

        # Queue storage
        self._queue: asyncio.Queue[QueueItem] = asyncio.Queue()
        self._processing: Dict[str, QueueItem] = {}
        self._failed: Dict[str, QueueItem] = {}
        self._completed: Dict[str, QueueItem] = {}

        # Track active tasks
        self._active_tasks: Set[asyncio.Task] = set()
        self._processing_lock = asyncio.Lock()

        # Status tracking
        self._guild_queues: Dict[int, Set[str]] = {}
        self._channel_queues: Dict[int, Set[str]] = {}

        # Cleanup references
        self._weak_refs: Set[weakref.ref] = set()

        # Start queue processor
        self._processor_task = asyncio.create_task(self._process_queue())
        self._active_tasks.add(self._processor_task)

    async def add_to_queue(
        self,
        url: str,
        message_id: int,
        channel_id: int,
        guild_id: int,
        author_id: int,
        callback: Callable[[str, bool, str], Any]
    ) -> bool:
        """Add a video to the processing queue"""
        try:
            # Create queue item
            item = QueueItem(
                url=url,
                message_id=message_id,
                channel_id=channel_id,
                guild_id=guild_id,
                author_id=author_id,
                added_at=datetime.utcnow(),
                callback=callback
            )

            # Add to tracking collections
            if guild_id not in self._guild_queues:
                self._guild_queues[guild_id] = set()
            self._guild_queues[guild_id].add(url)

            if channel_id not in self._channel_queues:
                self._channel_queues[channel_id] = set()
            self._channel_queues[channel_id].add(url)

            # Add to queue
            await self._queue.put(item)

            # Create weak reference for cleanup
            self._weak_refs.add(weakref.ref(item))

            logger.info(f"Added video to queue: {url}")
            return True

        except Exception as e:
            logger.error(f"Error adding video to queue: {str(e)}")
            return False

    async def _process_queue(self):
        """Process videos in the queue sequentially"""
        while True:
            try:
                # Get next item from queue
                item = await self._queue.get()

                async with self._processing_lock:
                    self._processing[item.url] = item
                    item.status = "processing"

                try:
                    # Execute callback with the URL
                    success = await item.callback(item.url, True, "")

                    if success:
                        item.status = "completed"
                        self._completed[item.url] = item
                        logger.info(f"Successfully processed video: {item.url}")
                    else:
                        # Handle retry logic
                        item.attempt += 1
                        if item.attempt < self.max_retries:
                            # Re-queue with delay
                            await asyncio.sleep(self.retry_delay * item.attempt)
                            await self._queue.put(item)
                            logger.info(f"Retrying video processing: {item.url} (Attempt {item.attempt + 1})")
                        else:
                            item.status = "failed"
                            item.error = "Max retries exceeded"
                            self._failed[item.url] = item
                            logger.error(f"Failed to process video after {self.max_retries} attempts: {item.url}")

                            # Notify callback of failure
                            await item.callback(item.url, False, item.error)

                except Exception as e:
                    logger.error(f"Error processing video: {str(e)}\n{traceback.format_exc()}")
                    item.status = "failed"
                    item.error = str(e)
                    self._failed[item.url] = item

                    # Notify callback of failure
                    await item.callback(item.url, False, str(e))

                finally:
                    # Clean up tracking
                    self._processing.pop(item.url, None)
                    if item.guild_id in self._guild_queues:
                        self._guild_queues[item.guild_id].discard(item.url)
                    if item.channel_id in self._channel_queues:
                        self._channel_queues[item.channel_id].discard(item.url)

                    # Mark queue item as done
                    self._queue.task_done()

            except asyncio.CancelledError:
                break
            except Exception as e:
                logger.error(f"Queue processor error: {str(e)}\n{traceback.format_exc()}")
                await asyncio.sleep(1)  # Prevent tight error loop

    def get_queue_status(self, guild_id: Optional[int] = None) -> Dict[str, int]:
        """Get current queue status, optionally filtered by guild"""
        if guild_id is not None:
            guild_urls = self._guild_queues.get(guild_id, set())
            return {
                "pending": sum(1 for item in self._queue._queue if item.url in guild_urls),
                "processing": sum(1 for url in self._processing if url in guild_urls),
                "completed": sum(1 for url in self._completed if url in guild_urls),
                "failed": sum(1 for url in self._failed if url in guild_urls)
            }
        else:
            return {
                "pending": self._queue.qsize(),
                "processing": len(self._processing),
                "completed": len(self._completed),
                "failed": len(self._failed)
            }

    def get_channel_queue_size(self, channel_id: int) -> int:
        """Get the number of items queued for a specific channel"""
        return len(self._channel_queues.get(channel_id, set()))

    async def clear_guild_queue(self, guild_id: int) -> int:
        """Clear all queued items for a specific guild"""
        if guild_id not in self._guild_queues:
            return 0

        cleared = 0
        guild_urls = self._guild_queues[guild_id].copy()

        # Remove from main queue
        new_queue = asyncio.Queue()
        while not self._queue.empty():
            item = await self._queue.get()
            if item.guild_id != guild_id:
                await new_queue.put(item)
            else:
                cleared += 1

        self._queue = new_queue

        # Clean up tracking
        for url in guild_urls:
            self._processing.pop(url, None)
            self._completed.pop(url, None)
            self._failed.pop(url, None)

        self._guild_queues.pop(guild_id, None)

        # Clean up channel queues
        for channel_id, urls in list(self._channel_queues.items()):
            urls.difference_update(guild_urls)
            if not urls:
                self._channel_queues.pop(channel_id, None)

        return cleared

    async def cleanup(self):
        """Clean up resources and stop queue processing"""
        # Cancel processor task
        if self._processor_task and not self._processor_task.done():
            self._processor_task.cancel()
            try:
                await self._processor_task
            except asyncio.CancelledError:
                pass

        # Cancel all active tasks
        for task in self._active_tasks:
            if not task.done():
                task.cancel()

        await asyncio.gather(*self._active_tasks, return_exceptions=True)

        # Clear all collections
        self._queue = asyncio.Queue()
        self._processing.clear()
        self._completed.clear()
        self._failed.clear()
        self._guild_queues.clear()
        self._channel_queues.clear()

        # Clear weak references
        self._weak_refs.clear()
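A minimal usage sketch for this simpler sequential manager (illustrative; the callback, URL, and IDs are placeholders):

    import asyncio

    async def main():
        async def on_item(url: str, ok: bool, error: str) -> bool:
            return ok  # pretend processing always succeeds

        manager = VideoQueueManager(max_retries=2, retry_delay=1)
        await manager.add_to_queue(
            url="https://example.com/video",
            message_id=1, channel_id=2, guild_id=3, author_id=4,
            callback=on_item,
        )
        await asyncio.sleep(0.1)  # let the background processor run once
        print(manager.get_queue_status())
        await manager.cleanup()

    asyncio.run(main())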
299 videoarchiver/update_checker.py Normal file
@@ -0,0 +1,299 @@
"""Update checker for yt-dlp"""

import logging
import pkg_resources
from datetime import datetime, timedelta
import aiohttp
from packaging import version
import discord
from typing import Optional, Tuple, Dict, Any
import asyncio
import sys
import json
from pathlib import Path
import subprocess
import tempfile
import shutil  # used by the temp-dir cleanup in update_yt_dlp
import os

from .exceptions import UpdateError

logger = logging.getLogger('VideoArchiver')


class UpdateChecker:
    """Handles checking for yt-dlp updates"""

    GITHUB_API_URL = 'https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest'
    UPDATE_CHECK_INTERVAL = 21600  # 6 hours in seconds
    MAX_RETRIES = 3
    RETRY_DELAY = 5
    REQUEST_TIMEOUT = 30
    SUBPROCESS_TIMEOUT = 300  # 5 minutes

    def __init__(self, bot, config_manager):
        self.bot = bot
        self.config = config_manager
        self._check_task = None
        self._session: Optional[aiohttp.ClientSession] = None
        self._rate_limit_reset = 0
        self._remaining_requests = 60
        self._last_version_check: Dict[int, datetime] = {}

    async def _init_session(self) -> None:
        """Initialize aiohttp session with proper headers"""
        if self._session is None or self._session.closed:
            self._session = aiohttp.ClientSession(
                headers={
                    'Accept': 'application/vnd.github.v3+json',
                    'User-Agent': 'VideoArchiver-Bot'
                }
            )

    async def start(self) -> None:
        """Start the update checker task"""
        if self._check_task is None:
            await self._init_session()
            self._check_task = self.bot.loop.create_task(self._check_loop())
            logger.info("Update checker task started")

    async def stop(self) -> None:
        """Stop the update checker task and clean up"""
        if self._check_task:
            self._check_task.cancel()
            self._check_task = None

        if self._session and not self._session.closed:
            await self._session.close()
            self._session = None

        logger.info("Update checker task stopped")

    async def _check_loop(self) -> None:
        """Periodic update check loop with improved error handling"""
        await self.bot.wait_until_ready()

        while True:
            try:
                all_guilds = await self.config.config.all_guilds()
                current_time = datetime.utcnow()

                for guild_id, settings in all_guilds.items():
                    try:
                        if settings.get('disable_update_check', False):
                            continue

                        guild = self.bot.get_guild(guild_id)
                        if not guild:
                            continue

                        # Skip guilds that were checked recently
                        last_check = self._last_version_check.get(guild_id)
                        if last_check and (current_time - last_check).total_seconds() < self.UPDATE_CHECK_INTERVAL:
                            continue

                        # Check rate limits
                        if self._remaining_requests <= 0:
                            if current_time.timestamp() < self._rate_limit_reset:
                                continue
                            # Reset rate limit counters
                            self._remaining_requests = 60
                            self._rate_limit_reset = 0

                        await self._check_guild(guild, settings)
                        self._last_version_check[guild_id] = current_time

                    except Exception as e:
                        logger.error(f"Error checking updates for guild {guild_id}: {str(e)}")
                        continue

            except Exception as e:
                logger.error(f"Error in update check task: {str(e)}")

            await asyncio.sleep(self.UPDATE_CHECK_INTERVAL)

    async def _check_guild(self, guild: discord.Guild, settings: dict) -> None:
        """Check updates for a specific guild with improved error handling"""
        try:
            current_version = self._get_current_version()
            if not current_version:
                await self._log_error(
                    guild,
                    UpdateError("Could not determine current yt-dlp version"),
                    "checking current version"
                )
                return

            latest_version = await self._get_latest_version()
            if not latest_version:
                return  # Error already logged in _get_latest_version

            # Update last check time
            await self.config.config.guild(guild).last_update_check.set(
                datetime.utcnow().isoformat()
            )

            # Compare versions
            if version.parse(current_version) < version.parse(latest_version):
                await self._notify_update(guild, current_version, latest_version, settings)

        except Exception as e:
            await self._log_error(guild, e, "checking for updates")

    def _get_current_version(self) -> Optional[str]:
        """Get the current yt-dlp version with error handling"""
        try:
            return pkg_resources.get_distribution('yt-dlp').version
        except Exception as e:
            logger.error(f"Error getting current version: {str(e)}")
            return None

    async def _get_latest_version(self) -> Optional[str]:
        """Get the latest version from GitHub with retries and rate limit handling"""
        await self._init_session()

        for attempt in range(self.MAX_RETRIES):
            try:
                async with self._session.get(
                    self.GITHUB_API_URL,
                    timeout=aiohttp.ClientTimeout(total=self.REQUEST_TIMEOUT)
                ) as response:
                    # Update rate limit info
                    self._remaining_requests = int(response.headers.get('X-RateLimit-Remaining', 0))
                    self._rate_limit_reset = int(response.headers.get('X-RateLimit-Reset', 0))

                    if response.status == 200:
                        data = await response.json()
                        return data['tag_name'].lstrip('v')
                    elif response.status == 403 and 'X-RateLimit-Remaining' in response.headers:
                        logger.warning("GitHub API rate limit reached")
                        return None
                    elif response.status == 404:
                        raise UpdateError("GitHub API endpoint not found")
                    else:
                        raise UpdateError(f"GitHub API returned status {response.status}")

            except asyncio.TimeoutError:
                logger.error(f"Timeout getting latest version (attempt {attempt + 1}/{self.MAX_RETRIES})")
                if attempt == self.MAX_RETRIES - 1:
                    return None

            except Exception as e:
                logger.error(f"Error getting latest version (attempt {attempt + 1}/{self.MAX_RETRIES}): {str(e)}")
                if attempt == self.MAX_RETRIES - 1:
                    return None

            await asyncio.sleep(self.RETRY_DELAY * (attempt + 1))

        return None

    async def _notify_update(
        self,
        guild: discord.Guild,
        current_version: str,
        latest_version: str,
        settings: dict
    ) -> None:
        """Notify the bot owner about available updates with a retry mechanism"""
        owner = self.bot.get_user(self.bot.owner_id)
        if not owner:
            await self._log_error(
                guild,
                UpdateError("Could not find bot owner"),
                "sending update notification"
            )
            return

        message = (
            f"⚠️ A new version of yt-dlp is available!\n"
            f"Current: {current_version}\n"
            f"Latest: {latest_version}\n"
            f"Use `[p]videoarchiver updateytdlp` to update."
        )

        retry_attempts = settings.get("discord_retry_attempts", 3)
        for attempt in range(retry_attempts):
            try:
                await owner.send(message)
                return
            except discord.HTTPException as e:
                if attempt == retry_attempts - 1:
                    await self._log_error(
                        guild,
                        UpdateError(f"Failed to send update notification: {str(e)}"),
                        "sending update notification"
                    )
                else:
                    await asyncio.sleep(settings.get("discord_retry_delay", 5))

    async def _log_error(self, guild: discord.Guild, error: Exception, context: str) -> None:
        """Log an error to the guild's log channel with enhanced formatting"""
        timestamp = datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
        error_message = f"[{timestamp}] Error {context}: {str(error)}"

        log_channel = await self.config.get_channel(guild, "log")
        if log_channel:
            try:
                await log_channel.send(f"```\n{error_message}\n```")
            except discord.HTTPException as e:
                logger.error(f"Failed to send error to log channel: {str(e)}")

        logger.error(f"Guild {guild.id} - {error_message}")

    async def update_yt_dlp(self) -> Tuple[bool, str]:
        """Update yt-dlp to the latest version with improved error handling"""
        temp_dir = None
        try:
            # Create temporary directory for pip output
            temp_dir = tempfile.mkdtemp(prefix='ytdlp_update_')
            log_file = Path(temp_dir) / 'pip_log.txt'

            # Prepare pip command
            cmd = [
                sys.executable,
                '-m',
                'pip',
                'install',
                '--upgrade',
                'yt-dlp',
                '--log',
                str(log_file)
            ]

            # Run pip in a subprocess with a timeout
            process = await asyncio.create_subprocess_exec(
                *cmd,
                stdout=asyncio.subprocess.PIPE,
                stderr=asyncio.subprocess.PIPE
            )

            try:
                stdout, stderr = await asyncio.wait_for(
                    process.communicate(),
                    timeout=self.SUBPROCESS_TIMEOUT
                )
            except asyncio.TimeoutError:
                process.kill()
                raise UpdateError("Update process timed out")

            if process.returncode == 0:
                new_version = self._get_current_version()
                if new_version:
                    return True, f"Successfully updated to version {new_version}"
                return True, "Successfully updated (version unknown)"
            else:
                # Read detailed error log
                error_details = "Unknown error"
                if log_file.exists():
                    try:
                        error_details = log_file.read_text(errors='ignore')
                    except Exception:
                        pass
                return False, f"Failed to update: {error_details}"

        except Exception as e:
            return False, f"Error updating: {str(e)}"

        finally:
            # Clean up the temporary directory
            if temp_dir and os.path.exists(temp_dir):
                try:
                    shutil.rmtree(temp_dir)
                except Exception as e:
                    logger.error(f"Failed to cleanup temporary directory: {str(e)}")
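The version comparison in _check_guild relies on packaging.version.parse, which orders yt-dlp's date-based release tags correctly where naive string comparison fails. A small illustration:

    from packaging import version

    assert version.parse("2024.11.4") < version.parse("2024.11.18")
    # Lexical string comparison would get this backwards:
    assert "2024.11.4" > "2024.11.18"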
File diff suppressed because it is too large