Django select_for_update deadlock: detection and resolution

Unexpected deadlocks with Django's select_for_update usually surface in high‑traffic web services or background workers, where multiple processes lock rows of the same table concurrently. Inconsistent lock ordering leaves transactions blocking one another until the database aborts one of them, failing requests mid‑flight and breaking downstream processing.

# Example showing the issue
import threading
import time
from django.db import connection, transaction
from myapp.models import Counter

def worker(name, first_id, second_id):
    try:
        with transaction.atomic():
            # Lock one row, then pause so the other thread can lock the other row
            Counter.objects.select_for_update().get(id=first_id)
            print(f"{name} locked {first_id}")
            time.sleep(1)
            # Now ask for the row the other transaction already holds
            Counter.objects.select_for_update().get(id=second_id)
            print(f"{name} locked {second_id}")
    finally:
        connection.close()  # each thread opened its own database connection

t1 = threading.Thread(target=worker, args=('Tx1', 1, 2))
t2 = threading.Thread(target=worker, args=('Tx2', 2, 1))
t1.start(); t2.start()
t1.join(); t2.join()
# Output (order may vary):
# Tx1 locked 1
# Tx2 locked 2
# ...each transaction now waits for the row the other holds; the database
#    detects the cycle and aborts one of them with OperationalError ("deadlock detected")

When two transactions acquire row locks in opposite order, neither can proceed: each holds the lock the other needs. PostgreSQL detects the circular wait (after deadlock_timeout, 1 s by default) and aborts one transaction with an OperationalError to break the cycle; MySQL's InnoDB deadlock detector does the same. Related factors:

  • Inconsistent ORDER BY in select_for_update queries
  • Lack of nowait or timeout handling (see the timeout sketch after this list)
  • High concurrency on the same table
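
For the second factor above, PostgreSQL lets you bound how long a statement waits for a row lock with a transaction-scoped lock_timeout, so a blocked query fails fast with an OperationalError instead of sitting behind another transaction's locks. A minimal sketch, assuming PostgreSQL and reusing the hypothetical Counter model; the 2s value is illustrative:

from django.db import connection, transaction
from myapp.models import Counter

with transaction.atomic():
    with connection.cursor() as cursor:
        # SET LOCAL only applies to the current transaction (PostgreSQL)
        cursor.execute("SET LOCAL lock_timeout = '2s'")
    # If another transaction holds one of these rows for more than 2 seconds,
    # this query raises OperationalError instead of blocking indefinitely
    locked = list(
        Counter.objects.filter(id__in=[1, 2])
        .select_for_update()
        .order_by('id')
    )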

To diagnose this in your code:

# Detect deadlocks by catching the database exception
from django.db import OperationalError, transaction
import logging

def run_transaction():
    try:
        with transaction.atomic():
            # your select_for_update logic here
            pass
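    # By the time this handler runs, atomic() has already rolled back the transaction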
    except OperationalError as exc:
        if 'deadlock' in str(exc).lower():
            logging.warning('Deadlock detected: %s', exc)
            # optionally trigger a retry or alert

Fixing the Issue

The simplest fix is to add a deterministic ordering to the select_for_update query:

qs = MyModel.objects.select_for_update().order_by('id')

This forces every transaction to lock rows in the same sequence, eliminating circular waits.
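
Applied to the earlier two-thread example, each worker can lock both rows in a single ordered query; a minimal sketch, reusing the hypothetical Counter model:

def worker(name, ids):
    with transaction.atomic():
        # One ordered query locks all requested rows in ascending id order
        objs = list(
            Counter.objects.filter(id__in=ids)
            .select_for_update()
            .order_by('id')
        )
        print(f"{name} locked {[obj.id for obj in objs]}")

Because both threads now request the locks in the same sequence, the second simply waits for the first to commit instead of deadlocking.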

Best Practice Solution

import logging
import time
from django.db import OperationalError, transaction
from myapp.models import MyModel

MAX_RETRIES = 3

def safe_update():
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            with transaction.atomic():
                objs = (
                    MyModel.objects.select_for_update(nowait=True)
                    .order_by('id')
                )
                for obj in objs:
                    obj.value += 1
                    obj.save()
            break  # success, exit retry loop
        except OperationalError as exc:
            message = str(exc).lower()
            # PostgreSQL reports "deadlock detected" for deadlocks and
            # "could not obtain lock" when nowait=True hits an already-locked row
            if 'deadlock' in message or 'could not obtain lock' in message:
                logging.warning('Lock conflict on attempt %s, retrying', attempt)
                if attempt == MAX_RETRIES:
                    raise
                time.sleep(0.1 * attempt)  # brief backoff before the next attempt
            else:
                raise

The production‑ready version always orders rows, logs lock conflicts, and retries a configurable number of times with a short backoff, turning concurrency‑related stalls into a predictable, bounded recovery path.
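
If different deployments need different retry budgets, the limit can come from Django settings rather than a module constant. A small sketch; SELECT_FOR_UPDATE_MAX_RETRIES is a hypothetical setting name, not a built-in Django setting:

from django.conf import settings

# Fall back to 3 attempts when the setting is not defined
MAX_RETRIES = getattr(settings, 'SELECT_FOR_UPDATE_MAX_RETRIES', 3)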

What Doesn’t Work

❌ Using select_for_update without ordering: rows lock in whatever order the DB returns, creating circular waits

❌ Catching all exceptions and ignoring them: masks deadlock information and leaves transactions open

❌ Adding time.sleep() inside a transaction to “wait for lock”: merely prolongs contention and can trigger more deadlocks


When NOT to optimize

  • Read‑only queries: No row locks are taken, so deadlock risk is nil
  • Single‑process scripts: Concurrency does not exist, ordering adds no value
  • Small tables with trivial traffic: contention is rare, so deadlocks are unlikely to occur in the first place
  • One‑off data migrations: Run in a maintenance window where contention is controlled

Frequently Asked Questions

Q: How many times should I retry a deadlocked transaction?

Three attempts are typical; adjust based on observed contention.
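
To make the attempt count adjustable per call site, the retry loop can be factored into a small decorator; a sketch under the same assumptions as above, where retry_on_deadlock is a hypothetical helper rather than a Django API:

import functools
import logging
import time
from django.db import OperationalError

def retry_on_deadlock(max_retries=3, backoff=0.1):
    """Retry the wrapped function when the database reports a deadlock."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except OperationalError as exc:
                    if 'deadlock' not in str(exc).lower() or attempt == max_retries:
                        raise
                    logging.warning('Deadlock on attempt %s, retrying', attempt)
                    time.sleep(backoff * attempt)  # simple linear backoff
        return wrapper
    return decorator

The wrapped function should open its own transaction.atomic() block so each retry starts a fresh transaction.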

Q: Is nowait=True enough to avoid deadlocks?

No. With nowait=True the query raises an OperationalError immediately when a row is already locked instead of waiting, so that statement cannot end up in a deadlock, but other statements in the same transaction still can. Combine it with consistent ordering and retries.


Deadlocks are a subtle but common pitfall in high‑throughput Django apps. By enforcing a consistent lock order and handling the deadlock exception with retries, you turn a flaky failure into a predictable recovery path. Apply these patterns early to keep your production workload stable.
