mirror of
https://github.com/pacnpal/Pac-cogs.git
synced 2025-12-20 10:51:05 -05:00
# VideoArchiver Cog for Red-DiscordBot

A powerful video archiving cog that automatically downloads and reposts videos from monitored channels, with support for GPU-accelerated compression, multi-video processing, and role-based permissions.

## Features

- **Hardware-Accelerated Video Processing**:
  - NVIDIA GPU support using NVENC
  - AMD GPU support using AMF
  - Intel GPU support using QuickSync
  - ARM64/aarch64 support with the V4L2 M2M encoder
  - Multi-core CPU optimization
- **Smart Video Processing**:
  - Intelligent quality preservation
  - Compresses only when needed
  - Concurrent video processing
  - Default 8MB file size limit
- **Role-Based Access**:
  - Restrict archiving to specific roles
  - Allows all users by default
  - Per-guild role configuration
- **Wide Platform Support**:
  - Multiple video platforms supported via [yt-dlp](https://github.com/yt-dlp/yt-dlp)
  - Configurable site whitelist
  - Automatic quality selection

## Quick Installation

1. Install the cog:

```
[p]repo add video-archiver https://github.com/yourusername/discord-video-bot
[p]cog install video-archiver video_archiver
[p]load video_archiver
```

The required dependencies are installed automatically. If you need to install them manually (the version specifiers are quoted so the shell does not treat `>` as a redirect):

```bash
python -m pip install -U "yt-dlp>=2024.11.4" "ffmpeg-python>=0.2.0" "requests>=2.32.3"
```

### Important: Keeping yt-dlp Updated

The cog relies on [yt-dlp](https://github.com/yt-dlp/yt-dlp) for video downloading. Video platforms frequently change their sites, which can break downloads if yt-dlp is outdated. To keep things working, update yt-dlp regularly:

```bash
[p]pipinstall --upgrade yt-dlp

# Or manually:
python -m pip install -U yt-dlp
```

**Note**: Before submitting any GitHub issues related to video downloading, please make sure you have updated yt-dlp to the latest version; most download problems are resolved by updating.
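Since yt-dlp uses date-based version numbers (`YYYY.MM.DD`), checking whether an installed copy predates a known-good release reduces to a tuple comparison. A minimal sketch, not part of the cog (the version strings below are examples):

```python
# Hypothetical helper: decide whether an installed yt-dlp is older than
# a known-good release. yt-dlp uses date-based versions ("YYYY.MM.DD"),
# so comparing integer tuples is sufficient.

def is_outdated(installed: str, minimum: str) -> bool:
    """Return True if `installed` predates `minimum`."""
    def parse(version: str) -> tuple:
        return tuple(int(part) for part in version.split("."))
    return parse(installed) < parse(minimum)

print(is_outdated("2023.12.30", "2024.11.4"))  # True: update needed
print(is_outdated("2024.11.4", "2024.11.4"))   # False: up to date
```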
## Configuration

The cog supports both slash commands and traditional prefix commands. Use whichever style you prefer.

### Channel Setup

```
/videoarchiver setchannel #archive-channel      # Set archive channel
/videoarchiver setnotification #notify-channel  # Set notification channel
/videoarchiver setlogchannel #log-channel       # Set log channel for errors/notifications
/videoarchiver addmonitor #videos-channel       # Add channel to monitor
/videoarchiver removemonitor #channel           # Remove monitored channel

# Legacy prefix commands are also supported:
[p]videoarchiver setchannel #channel
[p]videoarchiver setnotification #channel
etc.
```

### Role Management

```
/videoarchiver addrole @role     # Add role that can trigger archiving
/videoarchiver removerole @role  # Remove role from allowed list
/videoarchiver listroles         # List all allowed roles (empty = all allowed)
```

### Video Settings

```
/videoarchiver setformat mp4    # Set video format
/videoarchiver setquality 1080  # Set max quality (pixels)
/videoarchiver setmaxsize 8     # Set max size (MB, default 8)
/videoarchiver toggledelete     # Toggle file cleanup
```
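When a download exceeds the `setmaxsize` cap, the cog derives a target video bitrate from the cap and the clip's duration, with roughly 5% headroom for container overhead. The arithmetic, as a standalone sketch:

```python
# Sketch of the size-cap -> bitrate calculation used for compression:
# the bits that fit in the target size, spread across the duration,
# scaled to 95% to leave headroom for container overhead.

def target_bitrate(max_size_mb: int, duration_s: float) -> int:
    """Target video bitrate in bits/second for a given size cap."""
    target_bytes = max_size_mb * 1024 * 1024
    return int(target_bytes * 8 / duration_s * 0.95)

# An 8 MB cap on a 60-second clip allows roughly 1.06 Mb/s of video:
print(target_bitrate(8, 60.0))
```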
### Message Settings

```
/videoarchiver setduration 24  # Set message duration (hours)
/videoarchiver settemplate "Archived video from {author}\nOriginal: {original_message}"
/videoarchiver enablesites     # Configure allowed sites
```
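The `settemplate` string is filled in with plain `str.format`; the available fields are `{author}`, `{url}`, and `{original_message}`, and a template may use any subset of them. For example (the values below are placeholders):

```python
# The cog substitutes template placeholders with str.format. Any of
# {author}, {url}, and {original_message} may appear in the template;
# unused fields are simply ignored.

template = "Archived video from {author}\nOriginal: {original_message}"
message = template.format(
    author="SomeUser",
    url="https://example.com/video",  # unused by this template
    original_message="https://discord.com/channels/1/2/3",
)
print(message)
```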
## Architecture Support

The cog supports multiple architectures:

- x86_64/amd64
- ARM64/aarch64
- ARMv7 (32-bit)
- Apple Silicon (M1/M2)

Hardware acceleration is configured automatically based on your system:

- x86_64: Full GPU support (NVIDIA, AMD, Intel)
- ARM64: V4L2 M2M hardware encoding when available
- All platforms: Multi-core CPU optimization
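The fallback order above amounts to a priority list: take the first GPU encoder that was detected, otherwise use multi-threaded `libx264` on the CPU. A simplified sketch (the `gpu` dict is a stand-in for the cog's runtime probing via `nvidia-smi`, `lspci`, and device nodes):

```python
# Simplified encoder selection mirroring the order above:
# NVIDIA -> AMD -> Intel -> ARM V4L2 M2M -> CPU libx264.

def pick_encoder(gpu: dict) -> str:
    if gpu.get("nvidia"):
        return "h264_nvenc"
    if gpu.get("amd"):
        return "h264_amf"
    if gpu.get("intel"):
        return "h264_qsv"
    if gpu.get("arm"):
        return "h264_v4l2m2m"
    return "libx264"  # multi-threaded CPU fallback

print(pick_encoder({"nvidia": True}))  # h264_nvenc
print(pick_encoder({}))                # libx264
```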
## Troubleshooting

1. **Permission Issues**:
   - Bot needs the "Manage Messages" permission
   - Bot needs the "Attach Files" permission
   - Bot needs the "Read Message History" permission
   - Bot needs "Use Application Commands" for slash commands

2. **Video Processing Issues**:
   - Ensure FFmpeg is properly installed
   - Check that GPU drivers are up to date
   - Verify file permissions in the downloads directory
   - Update yt-dlp if videos fail to download

3. **Role Issues**:
   - Verify the role hierarchy (the bot's role must be higher than the managed roles)
   - Check that roles are properly configured

4. **Performance Issues**:
   - Check available disk space
   - Monitor system resource usage

## Support

For support:

1. First, check the [Troubleshooting](#troubleshooting) section above
2. Update yt-dlp to the latest version:

```bash
[p]pipinstall --upgrade yt-dlp

# Or manually:
python -m pip install -U yt-dlp
```

3. If the issue persists after updating yt-dlp:
   - Join the Red-DiscordBot server and ask in the #support channel
   - Open an issue on GitHub with:
     - Your Red-DiscordBot version
     - The output of `[p]pipinstall list`
     - Steps to reproduce the issue
     - Any error messages

## Contributing

Contributions are welcome! Please feel free to submit a pull request.

Before submitting an issue:

1. Update yt-dlp to the latest version first:

```bash
[p]pipinstall --upgrade yt-dlp

# Or manually:
python -m pip install -U yt-dlp
```

2. If the issue persists after updating yt-dlp, please include:
   - Your Red-DiscordBot version
   - The output of `[p]pipinstall list`
   - Steps to reproduce the issue
   - Any error messages

## License

This cog is licensed under the MIT License - see the [LICENSE](../LICENSE) file for details.
{
    "author": [
        "PacNPal"
    ],
    "install_msg": "Thank you for installing the Pac-cogs repo!",
    "name": "Pac-cogs",
    "short": "Very cool cogs!",
    "description": "Right now, just a birthday cog."
}
# __init__.py
from .video_archiver import VideoArchiver


async def setup(bot):
    await bot.add_cog(VideoArchiver(bot))
# ffmpeg_manager.py
import os
import platform
import subprocess
import logging
import shutil
import requests
import zipfile
import tarfile
from pathlib import Path
import stat
import multiprocessing
import ffmpeg

logger = logging.getLogger('VideoArchiver')


class FFmpegManager:
    FFMPEG_URLS = {
        'Windows': {
            'x86_64': {
                'url': 'https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-win64-gpl.zip',
                'bin_name': 'ffmpeg.exe'
            }
        },
        'Linux': {
            'x86_64': {
                'url': 'https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-linux64-gpl.tar.xz',
                'bin_name': 'ffmpeg'
            },
            'aarch64': {  # ARM64
                'url': 'https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-linuxarm64-gpl.tar.xz',
                'bin_name': 'ffmpeg'
            },
            'armv7l': {  # ARM32
                'url': 'https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-linuxarm32-gpl.tar.xz',
                'bin_name': 'ffmpeg'
            }
        },
        'Darwin': {  # macOS
            'x86_64': {
                'url': 'https://evermeet.cx/ffmpeg/getrelease/zip',
                'bin_name': 'ffmpeg'
            },
            'arm64': {  # Apple Silicon
                'url': 'https://evermeet.cx/ffmpeg/getrelease/zip',
                'bin_name': 'ffmpeg'
            }
        }
    }

    def __init__(self):
        self.base_path = Path(__file__).parent / 'bin'
        self.base_path.mkdir(exist_ok=True)

        # Get system architecture
        self.system = platform.system()
        self.machine = platform.machine().lower()
        if self.machine == 'arm64':
            self.machine = 'aarch64'  # Normalize ARM64 naming

        # Try to use system FFmpeg first
        system_ffmpeg = shutil.which('ffmpeg')
        if system_ffmpeg:
            self.ffmpeg_path = Path(system_ffmpeg)
            logger.info(f"Using system FFmpeg: {self.ffmpeg_path}")
        else:
            # Fall back to downloaded FFmpeg
            try:
                arch_config = self.FFMPEG_URLS[self.system][self.machine]
                self.ffmpeg_path = self.base_path / arch_config['bin_name']
            except KeyError:
                raise Exception(f"Unsupported system/architecture: {self.system}/{self.machine}")

        self._gpu_info = self._detect_gpu()
        self._cpu_cores = multiprocessing.cpu_count()

        if not system_ffmpeg:
            self._ensure_ffmpeg()

    def _detect_gpu(self) -> dict:
        """Detect available GPUs and their capabilities"""
        gpu_info = {
            'nvidia': False,
            'amd': False,
            'intel': False,
            'arm': False
        }

        try:
            if self.system == 'Linux':
                # Check for an NVIDIA GPU
                nvidia_smi = subprocess.run(['nvidia-smi'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                if nvidia_smi.returncode == 0:
                    gpu_info['nvidia'] = True

                # Check for an AMD GPU
                if os.path.exists('/dev/dri/renderD128'):
                    gpu_info['amd'] = True

                # Check for an Intel GPU
                lspci = subprocess.run(['lspci'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                if b'VGA' in lspci.stdout and b'Intel' in lspci.stdout:
                    gpu_info['intel'] = True

                # Check for an ARM GPU
                if self.machine in ['aarch64', 'armv7l']:
                    gpu_info['arm'] = True

            elif self.system == 'Windows':
                # Check for any GPU using dxdiag
                subprocess.run(['dxdiag', '/t', 'temp_dxdiag.txt'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                if os.path.exists('temp_dxdiag.txt'):
                    with open('temp_dxdiag.txt', 'r') as f:
                        content = f.read().lower()
                        if 'nvidia' in content:
                            gpu_info['nvidia'] = True
                        if 'amd' in content or 'radeon' in content:
                            gpu_info['amd'] = True
                        if 'intel' in content:
                            gpu_info['intel'] = True
                    os.remove('temp_dxdiag.txt')

        except Exception as e:
            logger.warning(f"GPU detection failed: {str(e)}")

        return gpu_info

    def _get_optimal_ffmpeg_params(self, input_path: str, target_size_bytes: int) -> dict:
        """Get optimal FFmpeg parameters based on hardware and video size"""
        params = {
            'c:v': 'libx264',  # Default to CPU encoding
            'threads': str(self._cpu_cores),  # Use all CPU cores
            'preset': 'medium',
            'crf': '23',  # Default quality
            'maxrate': None,
            'bufsize': None,
            'movflags': '+faststart',  # Optimize for web playback
            'profile:v': 'high',  # High profile for better quality
            'level': '4.1',  # Compatibility level
            'pix_fmt': 'yuv420p'  # Standard pixel format
        }

        # Check whether GPU encoding is possible
        if self._gpu_info['nvidia']:
            params.update({
                'c:v': 'h264_nvenc',
                'preset': 'p4',  # High quality NVENC preset
                'rc:v': 'vbr',  # Variable bitrate for better quality
                'cq:v': '19',  # Quality level for NVENC
                'spatial-aq': '1',  # Enable spatial adaptive quantization
                'temporal-aq': '1',  # Enable temporal adaptive quantization
                'b_ref_mode': 'middle'  # Better quality for B-frames
            })
        elif self._gpu_info['amd']:
            params.update({
                'c:v': 'h264_amf',
                'quality': 'quality',
                'rc': 'vbr_peak',
                'enforce_hrd': '1',
                'vbaq': '1',  # Enable adaptive quantization
                'preanalysis': '1'
            })
        elif self._gpu_info['intel']:
            params.update({
                'c:v': 'h264_qsv',
                'preset': 'veryslow',  # Best quality for QSV
                'look_ahead': '1',
                'global_quality': '23'
            })
        elif self._gpu_info['arm']:
            # Use the V4L2 M2M encoder on supported ARM devices
            if os.path.exists('/dev/video-codec'):
                params.update({
                    'c:v': 'h264_v4l2m2m',  # V4L2 M2M encoder
                    'extra_hw_frames': '10'
                })
            else:
                # Fall back to optimized CPU encoding for ARM
                params.update({
                    'c:v': 'libx264',
                    'preset': 'medium',
                    'tune': 'fastdecode'
                })

        # Get the input file size and probe info
        input_size = os.path.getsize(input_path)
        probe = ffmpeg.probe(input_path)
        duration = float(probe['format']['duration'])

        # Only add bitrate constraints if compression is needed
        if input_size > target_size_bytes:
            # Calculate the target bitrate (bits/second) at 95% of the target size
            target_bitrate = int((target_size_bytes * 8) / duration * 0.95)

            params['maxrate'] = f"{target_bitrate}"
            params['bufsize'] = f"{target_bitrate * 2}"

            # Adjust quality settings based on the compression ratio
            ratio = input_size / target_size_bytes
            if ratio > 4:
                params['crf'] = '28' if params['c:v'] == 'libx264' else '23'
                params['preset'] = 'faster'
            elif ratio > 2:
                params['crf'] = '26' if params['c:v'] == 'libx264' else '21'
                params['preset'] = 'medium'
            else:
                params['crf'] = '23' if params['c:v'] == 'libx264' else '19'
                params['preset'] = 'slow'

        # Audio settings
        params.update({
            'c:a': 'aac',
            'b:a': '192k',  # High quality audio
            'ar': '48000'  # Standard sample rate
        })

        return params

    def _ensure_ffmpeg(self):
        """Ensure FFmpeg is available, downloading it if necessary"""
        if not self.ffmpeg_path.exists():
            self._download_ffmpeg()

        # Make the binary executable on Unix systems
        if self.system != 'Windows':
            self.ffmpeg_path.chmod(self.ffmpeg_path.stat().st_mode | stat.S_IEXEC)

    def _download_ffmpeg(self):
        """Download and extract the FFmpeg binary"""
        try:
            arch_config = self.FFMPEG_URLS[self.system][self.machine]
        except KeyError:
            raise Exception(f"Unsupported system/architecture: {self.system}/{self.machine}")

        url = arch_config['url']
        archive_path = self.base_path / f"ffmpeg_archive{'.zip' if self.system == 'Windows' else '.tar.xz'}"

        # Download the archive
        response = requests.get(url, stream=True)
        response.raise_for_status()
        with open(archive_path, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)

        # Extract the archive
        if self.system == 'Windows':
            with zipfile.ZipFile(archive_path, 'r') as zip_ref:
                ffmpeg_files = [f for f in zip_ref.namelist() if arch_config['bin_name'] in f]
                if ffmpeg_files:
                    zip_ref.extract(ffmpeg_files[0], self.base_path)
                    os.rename(self.base_path / ffmpeg_files[0], self.ffmpeg_path)
        else:
            with tarfile.open(archive_path, 'r:xz') as tar_ref:
                ffmpeg_files = [f for f in tar_ref.getnames() if arch_config['bin_name'] in f]
                if ffmpeg_files:
                    tar_ref.extract(ffmpeg_files[0], self.base_path)
                    os.rename(self.base_path / ffmpeg_files[0], self.ffmpeg_path)

        # Cleanup
        archive_path.unlink()

    def get_ffmpeg_path(self) -> str:
        """Get the path to the FFmpeg binary"""
        if not self.ffmpeg_path.exists():
            raise Exception("FFmpeg is not available")
        return str(self.ffmpeg_path)

    def get_compression_params(self, input_path: str, target_size_mb: int) -> dict:
        """Get optimal compression parameters for the given input file"""
        return self._get_optimal_ffmpeg_params(input_path, target_size_mb * 1024 * 1024)
{
    "name": "VideoArchiver",
    "author": ["Cline"],
    "description": "A powerful Discord video archiver cog that automatically downloads and reposts videos from monitored channels. Features include:\n- GPU-accelerated video compression (NVIDIA, AMD, Intel)\n- Multi-core CPU utilization\n- Concurrent multi-video processing\n- Intelligent quality preservation\n- Support for multiple video sites\n- Customizable archive messages\n- Automatic cleanup",
    "short": "Archive videos from Discord channels with GPU-accelerated compression",
    "tags": [
        "video",
        "archive",
        "download",
        "compression",
        "media"
    ],
    "requirements": [
        "yt-dlp>=2023.12.30",
        "ffmpeg-python>=0.2.0",
        "requests>=2.31.0"
    ],
    "min_bot_version": "3.5.0",
    "hidden": false,
    "disabled": false,
    "type": "COG"
}
# utils.py
import os
import shutil
import logging
import asyncio
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Set
import yt_dlp
import ffmpeg
from datetime import datetime, timedelta
from concurrent.futures import ThreadPoolExecutor
from .ffmpeg_manager import FFmpegManager

logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("VideoArchiver")

# Initialize the FFmpeg manager
ffmpeg_mgr = FFmpegManager()

# Global thread pool for concurrent downloads
download_pool = ThreadPoolExecutor(max_workers=3)


class VideoDownloader:
    def __init__(self, download_path: str, video_format: str, max_quality: int, max_file_size: int, enabled_sites: Optional[List[str]] = None):
        self.download_path = download_path
        self.video_format = video_format
        self.max_quality = max_quality
        self.max_file_size = max_file_size
        self.enabled_sites = enabled_sites
        self.url_patterns = self._get_url_patterns()

        # Configure yt-dlp options
        self.ydl_opts = {
            'format': f'bestvideo[height<={max_quality}]+bestaudio/best[height<={max_quality}]',
            'outtmpl': os.path.join(download_path, '%(title)s.%(ext)s'),
            'merge_output_format': video_format,
            'quiet': True,
            'no_warnings': True,
            'extract_flat': False,
            'concurrent_fragment_downloads': 3,
            'postprocessor_hooks': [self._check_file_size],
            'progress_hooks': [self._progress_hook],
            'ffmpeg_location': ffmpeg_mgr.get_ffmpeg_path(),
        }

    def _get_url_patterns(self) -> List[str]:
        """Get URL patterns for supported sites"""
        patterns = []
        with yt_dlp.YoutubeDL() as ydl:
            for extractor in ydl._ies:
                if hasattr(extractor, '_VALID_URL') and extractor._VALID_URL:
                    if not self.enabled_sites or any(site.lower() in extractor.IE_NAME.lower() for site in self.enabled_sites):
                        patterns.append(extractor._VALID_URL)
        return patterns

    def _check_file_size(self, info):
        """Check whether the file size is within limits"""
        if info.get('filepath') and os.path.exists(info['filepath']):
            size = os.path.getsize(info['filepath'])
            if size > (self.max_file_size * 1024 * 1024):
                logger.info(f"File exceeds size limit, will compress: {info['filepath']}")

    def _progress_hook(self, d):
        """Handle download progress"""
        if d['status'] == 'finished':
            logger.info(f"Download completed: {d['filename']}")

    async def download_video(self, url: str) -> Tuple[bool, str, str]:
        """Download and process a video"""
        try:
            # Configure yt-dlp for this download
            ydl_opts = self.ydl_opts.copy()

            with yt_dlp.YoutubeDL(ydl_opts) as ydl:
                # Run the download in an executor to avoid blocking the event loop
                info = await asyncio.get_event_loop().run_in_executor(
                    download_pool, lambda: ydl.extract_info(url, download=True)
                )

                if info is None:
                    return False, "", "Failed to extract video information"

                file_path = os.path.join(self.download_path, ydl.prepare_filename(info))

                if not os.path.exists(file_path):
                    return False, "", "Download completed but file not found"

                # Check the file size and compress if needed
                file_size = os.path.getsize(file_path)
                if file_size > (self.max_file_size * 1024 * 1024):
                    logger.info(f"Compressing video: {file_path}")
                    try:
                        # Get optimal compression parameters
                        params = ffmpeg_mgr.get_compression_params(file_path, self.max_file_size)
                        output_path = file_path + ".compressed." + self.video_format

                        # Configure ffmpeg with the optimal parameters
                        stream = ffmpeg.input(file_path)
                        stream = ffmpeg.output(stream, output_path, **params)

                        # Run the compression in an executor
                        await asyncio.get_event_loop().run_in_executor(
                            None,
                            lambda: ffmpeg.run(
                                stream,
                                capture_stdout=True,
                                capture_stderr=True,
                                overwrite_output=True,
                            ),
                        )

                        if os.path.exists(output_path):
                            compressed_size = os.path.getsize(output_path)
                            if compressed_size <= (self.max_file_size * 1024 * 1024):
                                os.remove(file_path)  # Remove the original
                                return True, output_path, ""
                            else:
                                os.remove(output_path)
                                return False, "", "Failed to compress to target size"
                    except Exception as e:
                        logger.error(f"Compression error: {str(e)}")
                        return False, "", f"Compression error: {str(e)}"

                return True, file_path, ""

        except Exception as e:
            logger.error(f"Download error: {str(e)}")
            return False, "", str(e)

    def is_supported_url(self, url: str) -> bool:
        """Check whether the URL is supported"""
        try:
            with yt_dlp.YoutubeDL() as ydl:
                # Try to extract info without downloading
                ie = ydl.extract_info(url, download=False, process=False)
                return ie is not None
        except Exception:
            return False


class MessageManager:
    def __init__(self, message_duration: int, message_template: str):
        self.message_duration = message_duration
        self.message_template = message_template
        self.scheduled_deletions: Dict[int, asyncio.Task] = {}

    def format_archive_message(self, author: str, url: str, original_message: str) -> str:
        return self.message_template.format(
            author=author, url=url, original_message=original_message
        )

    async def schedule_message_deletion(self, message_id: int, delete_func) -> None:
        if self.message_duration <= 0:
            return

        if message_id in self.scheduled_deletions:
            self.scheduled_deletions[message_id].cancel()

        async def delete_later():
            await asyncio.sleep(self.message_duration * 3600)  # Convert hours to seconds
            try:
                await delete_func()
            except Exception as e:
                logger.error(f"Failed to delete message {message_id}: {str(e)}")
            finally:
                self.scheduled_deletions.pop(message_id, None)

        self.scheduled_deletions[message_id] = asyncio.create_task(delete_later())


def secure_delete_file(file_path: str, passes: int = 3) -> bool:
    """Overwrite a file with random data before removing it"""
    if not os.path.exists(file_path):
        return True

    try:
        file_size = os.path.getsize(file_path)
        for _ in range(passes):
            with open(file_path, "wb") as f:
                f.write(os.urandom(file_size))
                f.flush()
                os.fsync(f.fileno())

        os.remove(file_path)

        if os.path.exists(file_path) or Path(file_path).exists():
            os.unlink(file_path)

        return not (os.path.exists(file_path) or Path(file_path).exists())

    except Exception as e:
        logger.error(f"Error during secure delete: {str(e)}")
        return False


def cleanup_downloads(download_path: str) -> None:
    """Securely delete everything in the download directory and recreate it"""
    try:
        if os.path.exists(download_path):
            for file_path in Path(download_path).glob("*"):
                secure_delete_file(str(file_path))

            shutil.rmtree(download_path, ignore_errors=True)
            Path(download_path).mkdir(parents=True, exist_ok=True)
    except Exception as e:
        logger.error(f"Error during cleanup: {str(e)}")
import os
|
||||
import re
|
||||
import discord
|
||||
from redbot.core import commands, Config
|
||||
from redbot.core.bot import Red
|
||||
from redbot.core.utils.chat_formatting import box
|
||||
from discord import app_commands
|
||||
import logging
|
||||
from pathlib import Path
|
||||
import yt_dlp
|
||||
import shutil
|
||||
import asyncio
|
||||
from typing import Optional, List, Set, Dict
|
||||
import sys
|
||||
|
||||
# Add cog directory to path for local imports
|
||||
cog_path = Path(__file__).parent
|
||||
if str(cog_path) not in sys.path:
|
||||
sys.path.append(str(cog_path))
|
||||
|
||||
# Import local utils
|
||||
from utils import VideoDownloader, secure_delete_file, cleanup_downloads, MessageManager
|
||||
from ffmpeg_manager import FFmpegManager
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
logger = logging.getLogger('VideoArchiver')
|
||||
|
||||
class VideoArchiver(commands.Cog):
    """Archive videos from Discord channels"""

    default_guild = {
        "archive_channel": None,
        "notification_channel": None,
        "log_channel": None,  # Added log channel
        "monitored_channels": [],
        "allowed_roles": [],  # Added role management
        "video_format": "mp4",
        "video_quality": 1080,
        "max_file_size": 8,  # Changed to 8MB default
        "delete_after_repost": True,
        "message_duration": 24,
        "message_template": "Archived video from {author}\nOriginal: {original_message}",
        "enabled_sites": [],
        "concurrent_downloads": 3
    }

    def __init__(self, bot: Red):
        self.bot = bot
        self.config = Config.get_conf(self, identifier=855847, force_registration=True)
        self.config.register_guild(**self.default_guild)

        # Initialize components dict for each guild
        self.components = {}
        self.download_path = Path(cog_path) / "downloads"
        self.download_path.mkdir(parents=True, exist_ok=True)

        # Clean up downloads on load
        cleanup_downloads(str(self.download_path))

        # Initialize FFmpeg manager
        self.ffmpeg_mgr = FFmpegManager()

    def cog_unload(self):
        """Cleanup when cog is unloaded"""
        if self.download_path.exists():
            shutil.rmtree(self.download_path, ignore_errors=True)

    async def initialize_guild_components(self, guild_id: int):
        """Initialize or update components for a guild"""
        settings = await self.config.guild_from_id(guild_id).all()

        self.components[guild_id] = {
            'downloader': VideoDownloader(
                str(self.download_path),
                settings['video_format'],
                settings['video_quality'],
                settings['max_file_size'],
                settings['enabled_sites'] if settings['enabled_sites'] else None
            ),
            'message_manager': MessageManager(
                settings['message_duration'],
                settings['message_template']
            )
        }

    def _check_user_roles(self, member: discord.Member, allowed_roles: List[int]) -> bool:
        """Check if user has permission to trigger archiving"""
        # If no roles are set, allow all users
        if not allowed_roles:
            return True

        # Check if user has any of the allowed roles
        return any(role.id in allowed_roles for role in member.roles)

    async def log_message(self, guild: discord.Guild, message: str, level: str = "info"):
        """Send a log message to the guild's log channel if set"""
        settings = await self.config.guild(guild).all()
        if settings["log_channel"]:
            try:
                log_channel = guild.get_channel(settings["log_channel"])
                if log_channel:
                    await log_channel.send(f"[{level.upper()}] {message}")
            except discord.HTTPException:
                logger.error(f"Failed to send log message to channel: {message}")
        logger.log(getattr(logging, level.upper()), message)

    @commands.hybrid_group(name="videoarchiver", aliases=["va"])
    @commands.guild_only()
    @commands.admin_or_permissions(administrator=True)
    async def videoarchiver(self, ctx: commands.Context):
        """Video Archiver configuration commands"""
        if ctx.invoked_subcommand is None:
            settings = await self.config.guild(ctx.guild).all()
            embed = discord.Embed(
                title="Video Archiver Settings",
                color=discord.Color.blue()
            )

            archive_channel = ctx.guild.get_channel(settings["archive_channel"]) if settings["archive_channel"] else None
            notification_channel = ctx.guild.get_channel(settings["notification_channel"]) if settings["notification_channel"] else None
            log_channel = ctx.guild.get_channel(settings["log_channel"]) if settings["log_channel"] else None
            monitored_channels = [ctx.guild.get_channel(c) for c in settings["monitored_channels"]]
            monitored_channels = [c.mention for c in monitored_channels if c]
            allowed_roles = [ctx.guild.get_role(r) for r in settings["allowed_roles"]]
            allowed_roles = [r.name for r in allowed_roles if r]

            embed.add_field(
                name="Archive Channel",
                value=archive_channel.mention if archive_channel else "Not set",
                inline=False
            )
            embed.add_field(
                name="Notification Channel",
                value=notification_channel.mention if notification_channel else "Same as archive",
                inline=False
            )
            embed.add_field(
                name="Log Channel",
                value=log_channel.mention if log_channel else "Not set",
                inline=False
            )
            embed.add_field(
                name="Monitored Channels",
                value="\n".join(monitored_channels) if monitored_channels else "None",
                inline=False
            )
            embed.add_field(
                name="Allowed Roles",
                value=", ".join(allowed_roles) if allowed_roles else "All roles (no restrictions)",
                inline=False
            )
            embed.add_field(name="Video Format", value=settings["video_format"], inline=True)
            embed.add_field(name="Max Quality", value=f"{settings['video_quality']}p", inline=True)
            embed.add_field(name="Max File Size", value=f"{settings['max_file_size']}MB", inline=True)
            embed.add_field(name="Delete After Repost", value=str(settings["delete_after_repost"]), inline=True)
            embed.add_field(name="Message Duration", value=f"{settings['message_duration']} hours", inline=True)
            embed.add_field(name="Concurrent Downloads", value=str(settings["concurrent_downloads"]), inline=True)
            embed.add_field(
                name="Enabled Sites",
                value=", ".join(settings["enabled_sites"]) if settings["enabled_sites"] else "All sites",
                inline=False
            )

            # Add hardware info
            gpu_info = self.ffmpeg_mgr._gpu_info
            cpu_cores = self.ffmpeg_mgr._cpu_cores

            hardware_info = f"CPU Cores: {cpu_cores}\n"
            if gpu_info['nvidia']:
                hardware_info += "NVIDIA GPU: Available (using NVENC)\n"
            if gpu_info['amd']:
                hardware_info += "AMD GPU: Available (using AMF)\n"
            if gpu_info['intel']:
                hardware_info += "Intel GPU: Available (using QSV)\n"
            if not any(gpu_info.values()):
                hardware_info += "No GPU acceleration available (using CPU)\n"

            embed.add_field(name="Hardware Info", value=hardware_info, inline=False)

            await ctx.send(embed=embed)

    @videoarchiver.command(name="addrole")
    async def add_allowed_role(self, ctx: commands.Context, role: discord.Role):
        """Add a role that's allowed to trigger archiving"""
        async with self.config.guild(ctx.guild).allowed_roles() as roles:
            if role.id not in roles:
                roles.append(role.id)
        await ctx.send(f"Added {role.name} to allowed roles")
        await self.log_message(ctx.guild, f"Added role {role.name} ({role.id}) to allowed roles")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="removerole")
    async def remove_allowed_role(self, ctx: commands.Context, role: discord.Role):
        """Remove a role from allowed roles"""
        async with self.config.guild(ctx.guild).allowed_roles() as roles:
            if role.id in roles:
                roles.remove(role.id)
        await ctx.send(f"Removed {role.name} from allowed roles")
        await self.log_message(ctx.guild, f"Removed role {role.name} ({role.id}) from allowed roles")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="listroles")
    async def list_allowed_roles(self, ctx: commands.Context):
        """List all roles allowed to trigger archiving"""
        roles = await self.config.guild(ctx.guild).allowed_roles()
        if not roles:
            await ctx.send("No roles are currently allowed (all users can trigger archiving)")
            return

        role_names = [r.name for r in [ctx.guild.get_role(role_id) for role_id in roles] if r]
        await ctx.send(f"Allowed roles: {', '.join(role_names)}")

    @videoarchiver.command(name="setconcurrent")
    async def set_concurrent_downloads(self, ctx: commands.Context, count: int):
        """Set the number of concurrent downloads (1-5)"""
        if not 1 <= count <= 5:
            await ctx.send("Concurrent downloads must be between 1 and 5")
            return

        await self.config.guild(ctx.guild).concurrent_downloads.set(count)
        await ctx.send(f"Concurrent downloads set to {count}")
        await self.log_message(ctx.guild, f"Concurrent downloads set to {count}")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="setchannel")
    async def set_archive_channel(self, ctx: commands.Context, channel: discord.TextChannel):
        """Set the archive channel"""
        await self.config.guild(ctx.guild).archive_channel.set(channel.id)
        await ctx.send(f"Archive channel set to {channel.mention}")
        await self.log_message(ctx.guild, f"Archive channel set to {channel.name} ({channel.id})")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="setnotification")
    async def set_notification_channel(self, ctx: commands.Context, channel: discord.TextChannel):
        """Set the notification channel (where archive messages appear)"""
        await self.config.guild(ctx.guild).notification_channel.set(channel.id)
        await ctx.send(f"Notification channel set to {channel.mention}")
        await self.log_message(ctx.guild, f"Notification channel set to {channel.name} ({channel.id})")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="setlogchannel")
    async def set_log_channel(self, ctx: commands.Context, channel: discord.TextChannel):
        """Set the log channel for error messages and notifications"""
        await self.config.guild(ctx.guild).log_channel.set(channel.id)
        await ctx.send(f"Log channel set to {channel.mention}")
        await self.log_message(ctx.guild, f"Log channel set to {channel.name} ({channel.id})")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="addmonitor")
    async def add_monitored_channel(self, ctx: commands.Context, channel: discord.TextChannel):
        """Add a channel to monitor for videos"""
        async with self.config.guild(ctx.guild).monitored_channels() as channels:
            if channel.id not in channels:
                channels.append(channel.id)
        await ctx.send(f"Now monitoring {channel.mention} for videos")
        await self.log_message(ctx.guild, f"Added {channel.name} ({channel.id}) to monitored channels")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="removemonitor")
    async def remove_monitored_channel(self, ctx: commands.Context, channel: discord.TextChannel):
        """Remove a channel from monitoring"""
        async with self.config.guild(ctx.guild).monitored_channels() as channels:
            if channel.id in channels:
                channels.remove(channel.id)
        await ctx.send(f"Stopped monitoring {channel.mention}")
        await self.log_message(ctx.guild, f"Removed {channel.name} ({channel.id}) from monitored channels")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="setformat")
    async def set_video_format(self, ctx: commands.Context, format: str):
        """Set the video format (e.g., mp4, webm)"""
        await self.config.guild(ctx.guild).video_format.set(format.lower())
        await ctx.send(f"Video format set to {format.lower()}")
        await self.log_message(ctx.guild, f"Video format set to {format.lower()}")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="setquality")
    async def set_video_quality(self, ctx: commands.Context, quality: int):
        """Set the maximum video quality in pixels (e.g., 1080)"""
        await self.config.guild(ctx.guild).video_quality.set(quality)
        await ctx.send(f"Maximum video quality set to {quality}p")
        await self.log_message(ctx.guild, f"Video quality set to {quality}p")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="setmaxsize")
    async def set_max_file_size(self, ctx: commands.Context, size: int):
        """Set the maximum file size in MB"""
        await self.config.guild(ctx.guild).max_file_size.set(size)
        await ctx.send(f"Maximum file size set to {size}MB")
        await self.log_message(ctx.guild, f"Maximum file size set to {size}MB")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="toggledelete")
    async def toggle_delete_after_repost(self, ctx: commands.Context):
        """Toggle whether to delete local files after reposting"""
        current = await self.config.guild(ctx.guild).delete_after_repost()
        await self.config.guild(ctx.guild).delete_after_repost.set(not current)
        await ctx.send(f"Delete after repost: {not current}")
        await self.log_message(ctx.guild, f"Delete after repost set to: {not current}")

    @videoarchiver.command(name="setduration")
    async def set_message_duration(self, ctx: commands.Context, hours: int):
        """Set how long to keep archive messages (0 for permanent)"""
        await self.config.guild(ctx.guild).message_duration.set(hours)
        await ctx.send(f"Archive message duration set to {hours} hours")
        await self.log_message(ctx.guild, f"Message duration set to {hours} hours")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="settemplate")
    async def set_message_template(self, ctx: commands.Context, *, template: str):
        """Set the archive message template. Use {author}, {url}, and {original_message} as placeholders"""
        await self.config.guild(ctx.guild).message_template.set(template)
        await ctx.send(f"Archive message template set to:\n{template}")
        await self.log_message(ctx.guild, "Message template updated")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="enablesites")
    async def enable_sites(self, ctx: commands.Context, *sites: str):
        """Enable specific sites (leave empty for all sites)"""
        sites = [s.lower() for s in sites]
        if not sites:
            await self.config.guild(ctx.guild).enabled_sites.set([])
            await ctx.send("All sites enabled")
        else:
            # Verify sites are valid
            with yt_dlp.YoutubeDL() as ydl:
                valid_sites = set(ie.IE_NAME.lower() for ie in ydl._ies)
                invalid_sites = [s for s in sites if s not in valid_sites]
                if invalid_sites:
                    await ctx.send(f"Invalid sites: {', '.join(invalid_sites)}\nValid sites: {', '.join(valid_sites)}")
                    return

            await self.config.guild(ctx.guild).enabled_sites.set(sites)
            await ctx.send(f"Enabled sites: {', '.join(sites)}")

        await self.log_message(ctx.guild, f"Enabled sites updated: {', '.join(sites) if sites else 'All sites'}")
        await self.initialize_guild_components(ctx.guild.id)

    @videoarchiver.command(name="listsites")
    async def list_sites(self, ctx: commands.Context):
        """List all available sites and currently enabled sites"""
        settings = await self.config.guild(ctx.guild).all()
        enabled_sites = settings["enabled_sites"]

        embed = discord.Embed(
            title="Video Sites Configuration",
            color=discord.Color.blue()
        )

        with yt_dlp.YoutubeDL() as ydl:
            all_sites = sorted(ie.IE_NAME for ie in ydl._ies if ie.IE_NAME is not None)

        # Split sites into chunks for Discord's field value limit
        chunk_size = 20
        site_chunks = [all_sites[i:i + chunk_size] for i in range(0, len(all_sites), chunk_size)]

        for i, chunk in enumerate(site_chunks, 1):
            embed.add_field(
                name=f"Available Sites ({i}/{len(site_chunks)})",
                value=", ".join(chunk),
                inline=False
            )

        embed.add_field(
            name="Currently Enabled",
            value=", ".join(enabled_sites) if enabled_sites else "All sites",
            inline=False
        )

        await ctx.send(embed=embed)

    async def process_video_url(self, url: str, message: discord.Message) -> bool:
        """Process a video URL: download, reupload, and cleanup"""
        guild_id = message.guild.id

        # Initialize components if needed
        if guild_id not in self.components:
            await self.initialize_guild_components(guild_id)

        try:
            await message.add_reaction('⏳')
            await self.log_message(message.guild, f"Processing video URL: {url}")

            settings = await self.config.guild(message.guild).all()

            # Check user roles
            if not self._check_user_roles(message.author, settings['allowed_roles']):
                await message.add_reaction('🚫')
                return False

            # Download video
            success, file_path, error = await self.components[guild_id]['downloader'].download_video(url)

            if not success:
                await message.add_reaction('❌')
                await self.log_message(message.guild, f"Failed to download video: {error}", "error")
                return False

            # Get channels
            archive_channel = message.guild.get_channel(settings['archive_channel'])
            notification_channel = message.guild.get_channel(
                settings['notification_channel'] if settings['notification_channel']
                else settings['archive_channel']
            )

            if not archive_channel or not notification_channel:
                await self.log_message(message.guild, "Required channels not found!", "error")
                return False

            try:
                # Upload to archive channel
                file = discord.File(file_path)
                archive_message = await archive_channel.send(file=file)

                # Send notification with information
                notification_message = await notification_channel.send(
                    self.components[guild_id]['message_manager'].format_archive_message(
                        author=message.author.mention,
                        url=archive_message.attachments[0].url if archive_message.attachments else "No URL available",
                        original_message=message.jump_url
                    )
                )

                # Schedule notification message deletion if needed
                await self.components[guild_id]['message_manager'].schedule_message_deletion(
                    notification_message.id,
                    notification_message.delete
                )

                await message.add_reaction('✅')
                await self.log_message(message.guild, f"Successfully archived video from {message.author}")

            except discord.HTTPException as e:
                await self.log_message(message.guild, f"Failed to upload video: {str(e)}", "error")
                await message.add_reaction('❌')
                return False

            finally:
                # Always attempt to delete the file if configured
                if settings['delete_after_repost']:
                    if secure_delete_file(file_path):
                        await self.log_message(message.guild, f"Successfully deleted file: {file_path}")
                    else:
                        await self.log_message(message.guild, f"Failed to delete file: {file_path}", "error")
                        # Emergency cleanup
                        cleanup_downloads(str(self.download_path))

            return True

        except Exception as e:
            await self.log_message(message.guild, f"Error processing video: {str(e)}", "error")
            await message.add_reaction('❌')
            return False

    @commands.Cog.listener()
    async def on_message(self, message: discord.Message):
        if message.author.bot or not message.guild:
            return

        settings = await self.config.guild(message.guild).all()

        # Check if message is in a monitored channel
        if message.channel.id not in settings['monitored_channels']:
            return

        # Initialize components if needed
        if message.guild.id not in self.components:
            await self.initialize_guild_components(message.guild.id)

        # Find all video URLs in message
        urls = []
        with yt_dlp.YoutubeDL() as ydl:
            for ie in ydl._ies:
                if ie._VALID_URL:
                    urls.extend(re.findall(ie._VALID_URL, message.content))

        if urls:
            # Process multiple URLs concurrently but limited
            tasks = []
            semaphore = asyncio.Semaphore(settings['concurrent_downloads'])

            async def process_with_semaphore(url):
                async with semaphore:
                    return await self.process_video_url(url, message)

            for url in urls:
                tasks.append(asyncio.create_task(process_with_semaphore(url)))

            # Wait for all downloads to complete
            await asyncio.gather(*tasks)
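The semaphore-limited fan-out in `on_message` is a general asyncio pattern. A self-contained sketch (the `process` coroutine is a stand-in for `process_video_url`, and the example URLs are placeholders):

```python
import asyncio

async def process(url: str) -> str:
    # Stand-in for process_video_url; simulates a download
    await asyncio.sleep(0.01)
    return url

async def process_all(urls, limit: int = 3):
    # At most `limit` coroutines run at once, like concurrent_downloads
    semaphore = asyncio.Semaphore(limit)

    async def with_limit(url):
        async with semaphore:
            return await process(url)

    return await asyncio.gather(*(with_limit(u) for u in urls))

urls = [f"https://example.com/v/{i}" for i in range(5)]
print(asyncio.run(process_all(urls)))  # results come back in input order
```

`asyncio.gather` preserves argument order regardless of completion order, so results line up with the submitted URLs even when the semaphore reorders execution.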