
Django-RQ Deployment Guide

Complete guide for deploying Django-RQ to production environments using Docker, Docker Compose, Kubernetes, or cloud platforms.


Architecture Overview

A production Django-RQ deployment has four main components:

Key Components
  1. Django API: Web server (1+ processes)
  2. RQ Workers: Background job processors (1+ per queue)
  3. RQ Scheduler: Cron-like scheduler (1 process)
  4. Redis: Message broker and result backend
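
Jobs flow from the Django API to workers through Redis. A minimal sketch of the enqueue side (the task function here is hypothetical):

import django_rq

# RQ tasks are plain importable functions; no decorator is required.
def send_welcome_email(user_id: int) -> None:
    ...

# Enqueue on the default queue...
django_rq.enqueue(send_welcome_email, user_id=42)

# ...or target a specific queue explicitly.
django_rq.get_queue("high").enqueue(send_welcome_email, user_id=42)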

Docker Compose Deployment

Production Setup

Complete docker-compose configuration for production:

docker-compose-production.yaml
version: '3.8'

services:
  # Django API Server
  django:
    image: your-registry/django-app:latest
    container_name: django-api
    restart: unless-stopped
    env_file: .env.prod
    environment:
      DJANGO_SETTINGS_MODULE: api.settings
    command: gunicorn api.wsgi:application --bind 0.0.0.0:8000 --workers 4
    ports:
      - "8000:8000"
    depends_on:
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/cfg/health/"]
      interval: 30s
      timeout: 10s
      retries: 3

  # RQ Worker - High Priority
  # (no container_name here: a fixed name conflicts with replicas > 1)
  rq-worker-high:
    image: your-registry/django-app:latest
    restart: unless-stopped
    env_file: .env.prod
    environment:
      DJANGO_SETTINGS_MODULE: api.settings
    command: python manage.py rqworker high default
    depends_on:
      - redis
      - django
    deploy:
      replicas: 2  # Scale as needed
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
    healthcheck:
      test: ["CMD-SHELL", "pgrep -f rqworker > /dev/null || exit 1"]
      interval: 30s

  # RQ Worker - Low Priority
  rq-worker-low:
    image: your-registry/django-app:latest
    container_name: rq-worker-low
    restart: unless-stopped
    env_file: .env.prod
    environment:
      DJANGO_SETTINGS_MODULE: api.settings
    command: python manage.py rqworker low knowledge
    depends_on:
      - redis
      - django
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # RQ Scheduler
  rq-scheduler:
    image: your-registry/django-app:latest
    container_name: rq-scheduler
    restart: unless-stopped
    env_file: .env.prod
    environment:
      DJANGO_SETTINGS_MODULE: api.settings
    command: python manage.py rqscheduler
    depends_on:
      - redis
      - django
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # Redis
  redis:
    image: redis:7-alpine
    container_name: redis
    restart: unless-stopped
    command: >
      redis-server
      --maxmemory 512mb
      --maxmemory-policy allkeys-lru
      --appendonly yes
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s

volumes:
  redis_data:
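
The compose file assumes an application image already pushed to your registry. A minimal Dockerfile sketch for building one (the project layout and requirements file are assumptions):

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Run as a non-root user (see the security checklist below)
RUN useradd --create-home appuser
USER appuser
CMD ["gunicorn", "api.wsgi:application", "--bind", "0.0.0.0:8000"]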

Start Production Services

# Start all services
docker compose -f docker-compose-production.yaml up -d

# Check status
docker compose -f docker-compose-production.yaml ps

Local Development

Option 1: Docker Services

Use separate services for development:

docker-compose-local-services.yml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --maxmemory 256mb

  rq-worker:
    build: .
    command: python manage.py rqworker default high low knowledge
    env_file: .env.local
    depends_on:
      - redis
    volumes:
      - ./:/app

  rq-scheduler:
    build: .
    command: python manage.py rqscheduler
    env_file: .env.local
    depends_on:
      - redis
    volumes:
      - ./:/app

# Start services
docker compose -f docker-compose-local-services.yml up -d

# Run Django locally
python manage.py runserver

Option 2: Python Processes

Run everything locally for active development:

# Terminal 1: Django server
python manage.py runserver

# Terminal 2: RQ Worker
python manage.py rqworker default high low knowledge

# Terminal 3: RQ Scheduler (optional)
python manage.py rqscheduler

Worker Reloads in Development

RQ workers do not hot-reload on code changes; restart the worker after editing task code. To save a process locally, the --with-scheduler flag runs RQ's built-in scheduler inside the worker, so a separate rqscheduler is optional:

python manage.py rqworker default --with-scheduler

Kubernetes Deployment

Complete K8s Manifests

k8s/django-rq.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: django-api
  template:
    metadata:
      labels:
        app: django-api
    spec:
      containers:
        - name: django
          image: your-registry/django-app:latest
          command: ["gunicorn"]
          args: ["api.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "4"]
          ports:
            - containerPort: 8000
          envFrom:
            - configMapRef:
                name: django-config
            - secretRef:
                name: django-secrets
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /cfg/health/
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-worker
spec:
  replicas: 4
  selector:
    matchLabels:
      app: rq-worker
  template:
    metadata:
      labels:
        app: rq-worker
    spec:
      containers:
        - name: worker
          image: your-registry/django-app:latest
          command: ["python", "manage.py", "rqworker"]
          args: ["default", "high", "low", "knowledge"]
          envFrom:
            - configMapRef:
                name: django-config
            - secretRef:
                name: django-secrets
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "1000m"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-scheduler
spec:
  replicas: 1  # Only 1 scheduler needed
  selector:
    matchLabels:
      app: rq-scheduler
  template:
    metadata:
      labels:
        app: rq-scheduler
    spec:
      containers:
        - name: scheduler
          image: your-registry/django-app:latest
          command: ["python", "manage.py", "rqscheduler"]
          envFrom:
            - configMapRef:
                name: django-config
            - secretRef:
                name: django-secrets
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"

Deploy to Kubernetes

# Apply all manifests
kubectl apply -f k8s/

# Check status
kubectl get pods
kubectl get svc

# View logs
kubectl logs -f deployment/rq-worker

# Scale workers
kubectl scale deployment/rq-worker --replicas=8

Scheduler Replicas

Always keep rq-scheduler replicas at 1 to avoid duplicate scheduled jobs.
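
Scaling can also be automated. A hedged sketch of a HorizontalPodAutoscaler for the rq-worker Deployment, scaling on CPU (the thresholds are assumptions to tune; scaling on queue depth instead would require a custom metrics adapter):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rq-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rq-worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70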


Cloud Platforms

AWS (ECS/Fargate)

ecs-task-definition.json
{
  "family": "django-rq",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "2048",
  "containerDefinitions": [
    {
      "name": "django-api",
      "image": "your-ecr-repo/django-app:latest",
      "command": ["gunicorn", "api.wsgi:application", "--bind", "0.0.0.0:8000"],
      "portMappings": [{
        "containerPort": 8000,
        "protocol": "tcp"
      }],
      "environment": [
        {"name": "DJANGO_SETTINGS_MODULE", "value": "api.settings"},
        {"name": "REDIS_URL", "value": "redis://redis.cache.amazonaws.com:6379/0"}
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/django-rq",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "django"
        }
      }
    },
    {
      "name": "rq-worker",
      "image": "your-ecr-repo/django-app:latest",
      "command": ["python", "manage.py", "rqworker", "default", "high", "low"],
      "environment": [
        {"name": "DJANGO_SETTINGS_MODULE", "value": "api.settings"},
        {"name": "REDIS_URL", "value": "redis://redis.cache.amazonaws.com:6379/0"}
      ]
    }
  ]
}

Google Cloud (Cloud Run + Cloud Tasks)

cloudrun-worker.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: django-rq-worker
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: gcr.io/your-project/django-app:latest
          command: ["python", "manage.py", "rqworker"]
          args: ["default", "high", "low"]
          env:
            - name: REDIS_URL
              value: "redis://10.0.0.3:6379/0"  # Memorystore IP
          resources:
            limits:
              memory: 1Gi
              cpu: "1"

Heroku

Procfile
web: gunicorn api.wsgi:application --bind 0.0.0.0:$PORT
worker: python manage.py rqworker default high low knowledge
scheduler: python manage.py rqscheduler

# Scale dynos
heroku ps:scale web=2 worker=4 scheduler=1

# Add Redis addon
heroku addons:create heroku-redis:premium-0

Configuration Examples

Environment Variables

.env.prod
# Django Settings
DJANGO_SETTINGS_MODULE=api.settings
SECRET_KEY=your-secret-key-min-50-chars
DEBUG=False

# Redis Configuration
REDIS_URL=redis://redis:6379/0

# Database
DATABASE_URL=postgresql://user:pass@postgres:5432/dbname

# Security
ALLOWED_HOSTS=api.example.com,.example.com
CSRF_TRUSTED_ORIGINS=https://api.example.com

# RQ Settings (optional - auto-configured by django-cfg)
# RQ_SHOW_ADMIN_LINK=True
# RQ_PROMETHEUS_ENABLED=True

Django-CFG Configuration

api/config.py
from django_cfg import DjangoConfig, DjangoRQConfig, RQQueueConfig, RQScheduleConfig

class ProductionConfig(DjangoConfig):
    debug: bool = False
    redis_url: str = "redis://redis:6379/0"

    django_rq: DjangoRQConfig = DjangoRQConfig(
        enabled=True,
        queues=[
            RQQueueConfig(queue="default", default_timeout=360),
            RQQueueConfig(queue="high", default_timeout=180),
            RQQueueConfig(queue="low", default_timeout=600),
            RQQueueConfig(queue="knowledge", default_timeout=1800),
        ],
        schedules=[
            RQScheduleConfig(
                func="apps.crypto.tasks.update_coin_prices",
                interval=300,  # Every 5 minutes
                queue="default",
            ),
        ],
        show_admin_link=True,
        prometheus_enabled=True,
    )
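
The func path in each schedule must point to a plain importable callable. A hypothetical sketch of the task referenced above (the module path matches the config; the body is an assumption):

# apps/crypto/tasks.py
def update_coin_prices() -> None:
    # Fetch latest prices and persist them; invoked every 5 minutes
    # by the schedule defined in ProductionConfig.
    ...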

Monitoring in Production

Health Checks

# HTTP health check
curl http://localhost:8000/cfg/health/

# Expected response
{"status": "healthy", "checks": {"database": "ok", "redis": "ok"}}

Logging

settings.py
LOGGING = {
    'version': 1,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'verbose',
        },
    },
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {message}',
            'style': '{',
        },
    },
    'loggers': {
        'rq.worker': {
            'handlers': ['console'],
            'level': 'INFO',
        },
        'rq.scheduler': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}

Metrics Collection

Access Prometheus metrics at /django-rq/metrics/:

# Scrape metrics
curl http://localhost:8000/django-rq/metrics/

# Example metrics
rq_jobs_total{queue="default",status="finished"} 12450
rq_queue_length{queue="default"} 42
rq_workers_count{queue="default"} 4
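
To collect these with Prometheus, a hedged scrape-job sketch (the target address and interval are assumptions for this deployment):

scrape_configs:
  - job_name: django-rq
    metrics_path: /django-rq/metrics/
    scrape_interval: 30s
    static_configs:
      - targets: ['django-api:8000']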

Scaling Strategies

Horizontal Scaling

# Scale workers (the service must not set a fixed container_name)
docker compose -f docker-compose-production.yaml up -d --scale rq-worker-high=8

# Verify
docker ps | grep rq-worker

Vertical Scaling

# Increase resources per worker
# Increase resources per worker
deploy:
  resources:
    limits:
      cpus: '2.0'  # Double CPU
      memory: 2G   # Double memory

Queue-Based Scaling

# Different worker pools for different queues
# (rqworker-pool requires RQ 1.14+ with a matching django-rq release)

# High-priority pool (more processes)
python manage.py rqworker-pool high --num-workers 4

# Low-priority pool (fewer processes)
python manage.py rqworker-pool low knowledge --num-workers 2

Best Practices

Security

Production Security Checklist
  • ✅ Use strong Redis password (requirepass in redis.conf; see the sketch after this list)
  • ✅ Restrict Redis network access (firewall rules)
  • ✅ Enable TLS for Redis connections
  • ✅ Use secrets management (not .env files)
  • ✅ Limit worker permissions (non-root user)
  • ✅ Enable authentication on RQ dashboard
  • ✅ Use read-only Redis for monitoring
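
For the password and TLS items, a hedged sketch (password and hostnames are placeholders):

# redis.conf: require authentication
requirepass your-strong-redis-password

# Client side: carry the password in the connection URL
REDIS_URL=redis://:your-strong-redis-password@redis:6379/0

# With TLS enabled, switch to the rediss:// scheme
REDIS_URL=rediss://:your-strong-redis-password@redis.example.com:6380/0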

Performance

Performance Optimization
  1. Worker Count: 1 worker per CPU core recommended
  2. Queue Separation: Separate queues for different priorities
  3. Result TTL: Set a short result TTL (e.g. 300s) for frequent jobs (see the sketch after this list)
  4. Connection Pooling: Use Redis connection pool (auto-enabled)
  5. Job Timeout: Set appropriate timeouts per queue
  6. Batch Processing: Group similar tasks together
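
Result TTL and job timeout can be set per job at enqueue time, using RQ's standard enqueue keywords (the task function is hypothetical):

import django_rq

def refresh_dashboard_cache() -> None:
    ...  # hypothetical frequent job

queue = django_rq.get_queue("default")
# Keep the result for 5 minutes; abort the job if it runs past 3 minutes.
queue.enqueue(refresh_dashboard_cache, result_ttl=300, job_timeout=180)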

Reliability

High Availability
  • Redis: Use Redis Sentinel or Cluster for HA
  • Workers: Run at least 2 workers per queue
  • Scheduler: Run 1 scheduler with restart policy
  • Monitoring: Set up alerts for queue depth
  • Health Checks: Monitor worker processes

Troubleshooting

Workers Not Processing Jobs

# Test Redis connection
python manage.py shell
>>> import django_rq
>>> conn = django_rq.get_connection()
>>> conn.ping()
True

Common Issues

| Issue | Cause | Solution |
| --- | --- | --- |
| Jobs stuck in queue | No workers running | Start workers: python manage.py rqworker |
| Jobs failing silently | Exceptions not logged | Check the failed job registry in admin |
| High memory usage | Result TTL too long | Reduce default_result_ttl in config |
| Slow job processing | Too few workers | Scale workers horizontally |
| Scheduler not running | Process crashed | Check logs, restart the scheduler |

Migration Checklist

Migrating from ReArq or Celery? Use these checklists:

Migration from ReArq

ReArq → Django-RQ Migration

  • Update docker-compose (remove rearq services)
  • Update config.py (DjangoRQConfig)
  • Convert async tasks to sync (if needed)
  • Update enqueue calls (django_rq.enqueue())
  • Start RQ workers (python manage.py rqworker)
  • Test scheduled jobs
  • Update monitoring dashboards
  • Remove ReArq dependencies

Migration from Celery

Celery → Django-RQ Migration

  • Update config (remove Celery config)
  • Convert @task to plain functions
  • Replace task.delay() with queue.enqueue()
  • Replace periodic tasks with RQ schedules
  • Update monitoring (Flower → RQ Admin)
  • Test job execution
  • Migrate cron schedules
  • Remove Celery dependencies
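
For the enqueue conversion, a minimal before/after sketch (the task name is hypothetical):

# Before (Celery)
#   send_report.delay(report_id=7)

# After (Django-RQ): tasks become plain functions, enqueued explicitly.
import django_rq

def send_report(report_id: int) -> None:
    ...

django_rq.enqueue(send_report, report_id=7)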

Next Steps