Django database connection lifetime: detection and resolution

Unexpected connection drops in Django applications often surface in high‑traffic APIs or background workers, because Django's default connection lifetime is zero. This forces Django to close the database socket at the end of every request, adding latency and risking transient failures. Adjusting CONN_MAX_AGE can keep connections alive safely.

# Example showing the issue (run with DJANGO_SETTINGS_MODULE configured)
from django.db import close_old_connections, connection

def simulate_request():
    # Django opens a new DB connection for the first query of each request
    with connection.cursor() as cursor:
        cursor.execute('SELECT 1')

print(f'Before any request: connection is None? {connection.connection is None}')
for i in range(3):
    simulate_request()
    # Django runs close_old_connections() via the request_finished signal;
    # call it directly here to mimic the end of a request cycle
    close_old_connections()
    print(f'After request {i+1}: connection is None? {connection.connection is None}')
# With CONN_MAX_AGE=0 the connection is None after every request – it was closed

Django sets CONN_MAX_AGE to 0 by default, which tells the ORM to close the DB socket at the end of every HTTP request. This mirrors a per‑request connection strategy and prevents stale connections, but it also forces a new TCP handshake and authentication round trip for each request, leading to latency spikes and occasional disconnect errors. This behavior is documented in the Django database settings guide. Related factors:

  • High request volume amplifies connection churn
  • Database servers with limited connection pools
  • Network latency between app and DB
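The churn cost is easy to demonstrate with the standard library alone. The sketch below uses sqlite3 as a stand‑in database purely for portability — a networked database pays a far larger per‑connection price, since every connect involves a TCP handshake and authentication:

```python
import os
import sqlite3
import tempfile
import time

# Throwaway database file to connect against
tmpdir = tempfile.mkdtemp()
db_path = os.path.join(tmpdir, 'demo.db')
sqlite3.connect(db_path).close()

N = 200

# Strategy 1: open and close a connection per "request" (like CONN_MAX_AGE=0)
start = time.perf_counter()
for _ in range(N):
    conn = sqlite3.connect(db_path)
    conn.execute('SELECT 1')
    conn.close()
per_request = time.perf_counter() - start

# Strategy 2: reuse one connection across all "requests" (persistent connection)
start = time.perf_counter()
conn = sqlite3.connect(db_path)
for _ in range(N):
    conn.execute('SELECT 1')
conn.close()
persistent = time.perf_counter() - start

print(f'per-request: {per_request:.4f}s, persistent: {persistent:.4f}s')
```

Even with a local file database the per‑request strategy pays a measurable overhead; against a remote PostgreSQL server the gap is typically orders of magnitude wider.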

To diagnose this in your code:

# Check the current lifetime setting
from django.conf import settings
print(f"CONN_MAX_AGE: {settings.DATABASES['default'].get('CONN_MAX_AGE', 0)}")

# Verify whether a connection is being reused
from django.db import connections
conn = connections['default']
print(f"Connection alive before request: {conn.connection is not None}")
# Run a request simulation (as in the example above) and inspect again
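Under the hood, Django stamps each new connection with an expiry time (now + CONN_MAX_AGE) and closes expired connections at the start and end of each request via close_old_connections(). A simplified, Django‑free sketch of that bookkeeping (the class and method names here are illustrative, not Django's):

```python
import time

class FakeConnection:
    """Mimics Django's per-connection expiry bookkeeping (simplified sketch)."""
    def __init__(self, conn_max_age):
        self.closed = False
        if conn_max_age is None:
            self.close_at = None  # never expire (CONN_MAX_AGE=None)
        else:
            # Django records the expiry when the connection is opened
            self.close_at = time.monotonic() + conn_max_age

    def close_if_obsolete(self):
        # Called around each request cycle
        if self.close_at is not None and time.monotonic() >= self.close_at:
            self.closed = True

# CONN_MAX_AGE=0: the expiry is "now", so the connection dies at request end
per_request_conn = FakeConnection(conn_max_age=0)
per_request_conn.close_if_obsolete()
print(per_request_conn.closed)   # True

# CONN_MAX_AGE=300: the connection survives until five minutes elapse
persistent_conn = FakeConnection(conn_max_age=300)
persistent_conn.close_if_obsolete()
print(persistent_conn.closed)    # False
```

This also explains why a negative value behaves like 0: the recorded expiry is already in the past on the first check.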

Fixing the Issue

The quickest fix is to set a non‑zero lifetime in your settings:

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'db.example.com',
        'PORT': '5432',
        'CONN_MAX_AGE': 300,  # keep connections open for 5 minutes
    }
}

This change immediately reduces connection churn and eliminates the “connection closed unexpectedly” symptoms in most deployments. Note that Django’s development server creates a new thread for each request, which negates persistent connections, so measure the effect in a production‑like setup.
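Rather than hard‑coding credentials, many deployments build this dict from a single DATABASE_URL environment variable. A minimal stdlib‑only sketch of that idea (the database_config helper is hypothetical; libraries such as dj-database-url do this more robustly):

```python
from urllib.parse import urlparse

def database_config(url, conn_max_age=300):
    """Build a Django DATABASES entry from a postgres:// URL (simplified sketch)."""
    parts = urlparse(url)
    return {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': parts.path.lstrip('/'),
        'USER': parts.username or '',
        'PASSWORD': parts.password or '',
        'HOST': parts.hostname or 'localhost',
        'PORT': str(parts.port or 5432),
        'CONN_MAX_AGE': conn_max_age,
    }

cfg = database_config('postgres://myuser:secret@db.example.com:5432/mydb')
print(cfg['NAME'], cfg['PORT'], cfg['CONN_MAX_AGE'])   # mydb 5432 300
```

Centralizing the lifetime in one helper keeps development and production profiles consistent while still letting you override CONN_MAX_AGE per environment.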

For production you usually want a more defensive approach:

# settings.py – production profile
import os

MAX_AGE = int(os.getenv('DJANGO_CONN_MAX_AGE', '600'))  # default 10 min

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.getenv('POSTGRES_DB'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASSWORD'),
        'HOST': os.getenv('POSTGRES_HOST'),
        'PORT': os.getenv('POSTGRES_PORT', '5432'),
        'CONN_MAX_AGE': MAX_AGE,
        'OPTIONS': {
            # Enable TCP keep‑alive to detect dead peers early
            'keepalives': 1,
            'keepalives_idle': 30,
            'keepalives_interval': 10,
            'keepalives_count': 5,
        },
    }
}

# Optional runtime guard – log whether a connection is being reused.
# Call this from a middleware or view; at import time no connection exists yet.
from django.db import connection
import logging

logger = logging.getLogger(__name__)

def log_connection_state():
    if connection.connection is not None and connection.is_usable():
        logger.debug('Reusing existing DB connection')
    else:
        logger.warning('Opening new DB connection – check CONN_MAX_AGE')

The gotcha here is that setting a very high lifetime on a DB server with strict max‑connection limits can exhaust the pool. Monitoring connection usage in production and tuning MAX_AGE accordingly is essential.
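The back‑of‑the‑envelope check: each worker thread can hold one persistent connection per database alias, so the worst case is roughly processes × threads per process. A small sketch of that arithmetic (the helper names and the headroom of 5 reserved connections are illustrative choices, not fixed rules):

```python
def worst_case_connections(processes, threads_per_process, aliases=1):
    """Each thread can hold one persistent connection per database alias."""
    return processes * threads_per_process * aliases

def fits_in_pool(processes, threads_per_process, max_connections, reserved=5):
    """Check the app's worst case against the server's max_connections,
    leaving headroom for migrations, cron jobs, and admin sessions."""
    need = worst_case_connections(processes, threads_per_process)
    return need <= max_connections - reserved

# e.g. 4 gunicorn workers x 8 threads against Postgres max_connections=100
print(worst_case_connections(4, 8))   # 32
print(fits_in_pool(4, 8, 100))        # True
print(fits_in_pool(16, 8, 100))       # False (128 > 95)
```

If the worst case does not fit, either lower CONN_MAX_AGE, shrink the worker fleet, or put a pooler such as PgBouncer between the app and the database.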

What Doesn’t Work

❌ Setting CONN_MAX_AGE = -1: A negative value puts the expiry timestamp in the past, so connections are still closed at the end of each request — it is not an “unlimited” flag (use None for that).

❌ Manually calling connection.close() after every query: This defeats the purpose of persistent connections and adds overhead. (Using with connection.cursor() as cur: is fine — the context manager closes the cursor, not the connection.)

❌ Raising max_connections on the DB server without adjusting CONN_MAX_AGE: It adds headroom but does nothing about per‑request reconnect latency, and it consumes more server memory.

Common pitfalls:

  • Leaving CONN_MAX_AGE at 0 in a high‑traffic service
  • Setting CONN_MAX_AGE to an extremely high value without checking DB max‑connections
  • Confusing CONN_MAX_AGE (persistent, per‑thread connections) with true connection pooling (e.g. PgBouncer)
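That last confusion is worth spelling out: CONN_MAX_AGE gives each thread its own long‑lived connection, while a pool shares a bounded set of connections among many consumers. A toy illustration of the pooling side (this TinyPool is a teaching sketch, not production code — PgBouncer or psycopg's pool implementations do this properly, with health checks and timeouts):

```python
import queue
import sqlite3

class TinyPool:
    """Toy fixed-size connection pool (illustration only)."""
    def __init__(self, make_conn, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())

    def acquire(self):
        return self._pool.get()   # blocks when the pool is exhausted

    def release(self, conn):
        self._pool.put(conn)

pool = TinyPool(lambda: sqlite3.connect(':memory:'), size=2)
c1 = pool.acquire()
c2 = pool.acquire()
print(pool._pool.qsize())   # 0 – pool exhausted; a third acquire would block
pool.release(c1)
pool.release(c2)
print(pool._pool.qsize())   # 2 – both connections returned
```

The key difference: a pool caps total connections regardless of thread count, whereas CONN_MAX_AGE scales with the number of worker threads.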

When NOT to optimize

  • Local development: Small test servers rarely hit connection limits, so the default is fine.
  • One‑off management scripts: Scripts that run once and exit don’t benefit from persistent connections.
  • SQLite back‑ends: Reopening a local database file is cheap, so persistent connections buy little.
  • Low‑traffic admin panels: If traffic is negligible, the overhead of a per‑request connection is minimal.

Frequently Asked Questions

Q: Can I set CONN_MAX_AGE to None for unlimited lifetime?

Yes. None keeps connections open indefinitely; make sure the database server (and any proxy in between) tolerates long‑lived sockets, since Django will not recycle them on its own.

Q: Does changing CONN_MAX_AGE affect SQLite databases?

Not meaningfully. The expiry mechanism applies to every backend, but reopening a local file handle is far cheaper than re‑establishing a network connection, so the setting has little practical effect for SQLite.


Managing Django’s database connection lifetime is a small tweak with big impact on latency and reliability. By configuring CONN_MAX_AGE sensibly and adding keep‑alive options, you protect your services from needless reconnect storms while keeping the pool healthy. Remember to monitor connection usage after any change to avoid surprising resource exhaustion.
