# Video Processor Service

A containerized Python service for processing video Edit Decision Lists (EDLs) using FFmpeg.
## Features

- RESTful API for video processing jobs
- EDL-based video editing with support for:
  - Clip trimming and concatenation
  - Volume and speed adjustments
  - Crop and color correction effects
  - Text overlays
  - Fade transitions
- Asynchronous job processing
- Support for Azure Blob Storage and AWS S3
- Progress tracking and status updates
- Containerized deployment with Docker
## Project Structure

```
video-processor-service/
├── app.py                   # Main Flask application
├── app/
│   ├── __init__.py
│   ├── ffmpeg_generator.py  # FFmpeg command generation
│   └── storage_handler.py   # Cloud storage abstraction
├── config/
│   ├── __init__.py
│   └── settings.py          # Configuration management
├── tests/                   # Test files
├── requirements.txt         # Python dependencies
├── Dockerfile               # Container definition
├── env.example              # Example environment variables
└── test_api.py              # API test script
```
## Setup

### Local Development

1. Create a Python virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Copy the example environment variables:

   ```bash
   cp env.example .env
   # Edit .env with your configuration
   ```

4. Install FFmpeg:

   - macOS: `brew install ffmpeg`
   - Ubuntu/Debian: `sudo apt-get install ffmpeg`
   - Windows: download from https://ffmpeg.org/download.html

5. Run the service:

   ```bash
   python app.py
   ```
### Docker

1. Build the Docker image:

   ```bash
   docker build -t video-processor .
   ```

2. Run the container:

   ```bash
   docker run -p 8000:8000 --env-file .env video-processor
   ```
## API Endpoints

### Health Check

```
GET /health
```

### Process EDL

```
POST /process-edl
Content-Type: application/json

{
  "projectId": "project-123",
  "timeline": [...],
  "outputSettings": {...}
}
```

### Get Job Status

```
GET /status/{job_id}
```

### Get All Jobs Status

```
GET /status
```
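A minimal Python client could submit an EDL and poll for completion like this. Note that the `jobId` response field and the `completed`/`failed` status values are assumptions about the response shape, not documented guarantees:

```python
import time

import requests

BASE_URL = "http://localhost:8000"  # service running locally or via Docker

edl = {
    "projectId": "project-123",
    "timeline": [],        # see "EDL Format" below
    "outputSettings": {},  # see "EDL Format" below
}

# Submit the processing job.
resp = requests.post(f"{BASE_URL}/process-edl", json=edl)
resp.raise_for_status()
job_id = resp.json().get("jobId")  # field name assumed

# Poll the job status until it finishes.
while True:
    status = requests.get(f"{BASE_URL}/status/{job_id}").json()
    print(status)
    if status.get("status") in ("completed", "failed"):  # status values assumed
        break
    time.sleep(2)
```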
## EDL Format

The service accepts an Edit Decision List (EDL) in JSON format:

```json
{
  "projectId": "unique-project-id",
  "timeline": [
    {
      "clipId": "clip-123",
      "startTime": 0,
      "endTime": 10,
      "sourceFile": "https://storage.example.com/video1.mp4",
      "volume": 1.0,
      "speed": 1.0,
      "effects": [
        {
          "type": "crop",
          "id": "effect-1",
          "parameters": {
            "x": 0,
            "y": 0,
            "width": 1920,
            "height": 1080
          }
        }
      ]
    }
  ],
  "outputSettings": {
    "format": "mp4",
    "resolution": "1920x1080",
    "framerate": 30,
    "videoBitrate": "8M",
    "audioBitrate": "192k",
    "codec": {
      "video": "libx264",
      "audio": "aac"
    }
  }
}
```
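As a rough illustration of how one timeline entry plus the output settings could translate into an FFmpeg invocation (the actual command generation lives in `app/ffmpeg_generator.py` and may differ):

```python
import subprocess

def build_clip_command(clip, output_settings, output_path):
    """Build a hypothetical FFmpeg command that trims one clip and re-encodes
    it according to the EDL's outputSettings."""
    return [
        "ffmpeg", "-y",
        "-i", clip["sourceFile"],
        "-ss", str(clip["startTime"]),  # trim start (seconds)
        "-to", str(clip["endTime"]),    # trim end (seconds)
        "-c:v", output_settings["codec"]["video"],
        "-c:a", output_settings["codec"]["audio"],
        "-b:v", output_settings["videoBitrate"],
        "-b:a", output_settings["audioBitrate"],
        "-r", str(output_settings["framerate"]),
        "-s", output_settings["resolution"],
        output_path,
    ]

# Example usage:
# subprocess.run(build_clip_command(clip, output_settings, "out.mp4"), check=True)
```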
## Supported Effects

- `crop`: Crop the video to the specified dimensions
- `colorCorrection`: Adjust brightness, contrast, saturation, and temperature
- `textOverlay`: Add text overlays with customizable styling
- `fade`: Fade in/out transitions
- `volume`: Audio volume adjustment
- `speed`: Playback speed modification
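One plausible way to map these effect types onto FFmpeg filter strings is sketched below. The parameter names (`brightness`, `fontSize`, `direction`, `level`, `factor`, ...) and the temperature handling are assumptions; the real mapping is in `app/ffmpeg_generator.py`:

```python
def effect_to_filter(effect):
    """Translate an EDL effect entry into an FFmpeg filter string (illustrative)."""
    p = effect.get("parameters", {})
    kind = effect["type"]
    if kind == "crop":
        return f"crop={p['width']}:{p['height']}:{p['x']}:{p['y']}"
    if kind == "colorCorrection":
        # eq covers brightness/contrast/saturation; temperature would need a
        # separate filter (e.g. colortemperature in newer FFmpeg builds).
        return (f"eq=brightness={p.get('brightness', 0)}"
                f":contrast={p.get('contrast', 1)}"
                f":saturation={p.get('saturation', 1)}")
    if kind == "textOverlay":
        return (f"drawtext=text='{p['text']}'"
                f":fontsize={p.get('fontSize', 24)}"
                f":fontcolor={p.get('color', 'white')}")
    if kind == "fade":
        return f"fade=t={p.get('direction', 'in')}:st={p.get('start', 0)}:d={p.get('duration', 1)}"
    if kind == "volume":
        return f"volume={p.get('level', 1.0)}"  # audio filter
    if kind == "speed":
        return f"setpts=PTS/{p.get('factor', 1.0)}"  # video only; audio needs atempo
    raise ValueError(f"Unsupported effect type: {kind}")
```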
## Testing

Run the test script to verify that the API is working:

```bash
python test_api.py
```
## Environment Variables

See `env.example` for all available configuration options.

Key variables:

- `FLASK_ENV`: Set to 'development' or 'production'
- `AZURE_STORAGE_CONNECTION_STRING`: For Azure Blob Storage
- `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`: For AWS S3
- `WEBHOOK_URL`: URL to receive processing updates
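A minimal sketch of how `config/settings.py` might read these values (the variable names match `env.example`, but this structure is an assumption):

```python
import os

FLASK_ENV = os.getenv("FLASK_ENV", "development")

# Cloud storage credentials (use whichever backend you configure)
AZURE_STORAGE_CONNECTION_STRING = os.getenv("AZURE_STORAGE_CONNECTION_STRING")
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")

# Optional webhook for job progress/completion notifications
WEBHOOK_URL = os.getenv("WEBHOOK_URL")
```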
## Production Deployment

For production deployment:

- Use a production WSGI server such as Gunicorn (see the configuration sketch after this list)
- Set up a message queue (Redis/Celery) for job processing
- Deploy to a container orchestration platform (Kubernetes, Azure Container Apps)
- Configure proper monitoring and logging
- Set up auto-scaling based on job queue length
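For example, a Gunicorn setup could use a `gunicorn.conf.py` like the one below. The values are illustrative, and `app:app` assumes the Flask instance in `app.py` is named `app`:

```python
# gunicorn.conf.py -- illustrative defaults; tune for your workload.
bind = "0.0.0.0:8000"
workers = 4        # rule of thumb: 2 * CPU cores + 1
timeout = 300      # video jobs can keep a worker busy for a while
accesslog = "-"    # write access logs to stdout for container platforms
```

Start the server with `gunicorn -c gunicorn.conf.py app:app`.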
## Next Steps

- Implement actual FFmpeg execution (currently simulated)
- Add cloud storage integration
- Implement webhook notifications
- Add more video effects and transitions
- Set up Celery for distributed job processing
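For the Celery item, a starting point might look like the following sketch (the broker/backend URLs, task name, and result shape are all assumptions, not part of the current codebase):

```python
from celery import Celery

celery_app = Celery(
    "video_processor",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@celery_app.task(bind=True)
def process_edl_task(self, edl):
    # A real worker would build and run the FFmpeg command here,
    # updating task state so the /status endpoint can report progress.
    self.update_state(state="PROGRESS", meta={"progress": 50})
    return {"projectId": edl["projectId"], "status": "completed"}
```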