What Does HTTP 504 Gateway Timeout Mean?
HTTP 504 Gateway Timeout means a server acting as a gateway or proxy did not receive a timely response from the upstream server it needed to complete the request. The proxy waited for a response, the timeout expired, and it gave up.
This is different from a regular timeout (where the browser times out waiting for the server). With 504, the client's request reached the proxy server successfully, but the proxy could not get the backend to respond in time. The bottleneck is between the proxy and the upstream server, not between the client and the proxy.
Every layer in the request chain has its own timeout: CDN (e.g., Cloudflare: 100s), load balancer (e.g., AWS ALB: 60s), reverse proxy (e.g., Nginx: 60s). A 504 occurs when the shortest timeout in the chain expires before the backend responds.
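Because the strictest timeout wins, the effective budget the backend has is the minimum across the chain. A tiny sketch (layer names and values are illustrative, taken from the defaults above):

```python
# Each hop enforces its own timeout; the backend must respond within the
# smallest of them, or some layer gives up and returns a 504.
timeouts = {
    "cdn_cloudflare": 100,     # seconds
    "alb": 60,
    "nginx_proxy_read": 60,
}

effective_budget = min(timeouts.values())
bottleneck = min(timeouts, key=timeouts.get)
print(effective_budget, bottleneck)
```

Raising one layer's timeout changes nothing unless it was the smallest value in this chain.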
Common Causes
- Slow database queries: An unoptimized query scanning millions of rows, a missing index on a WHERE clause, or a deadlocked transaction can cause the application to hang while waiting for the database, exceeding the proxy's timeout.
- Long-running synchronous operations: Generating large reports, processing file uploads, running complex computations, or calling slow third-party APIs synchronously. These operations should be moved to background workers.
- Upstream server deadlock or infinite loop: A bug in the application code causes a thread to hang indefinitely. The process accepts the connection but never sends a response.
- DNS resolution failure to upstream: If the proxy resolves the upstream hostname via DNS and the DNS server is slow or down, the proxy cannot even establish a connection within its timeout.
- Network issues between proxy and backend: Firewall rules blocking or throttling traffic, packet loss on the internal network, or a congested network link causing extreme latency between the proxy and backend server.
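Several of these causes share one pattern: a blocking call with no client-side timeout. A minimal sketch (the URL, budget, and fallback behavior are illustrative) of bounding an outbound call so the application fails fast instead of hanging until the proxy returns a 504:

```python
import urllib.request
from urllib.error import URLError

def fetch_with_budget(url: str, budget_s: float = 5.0):
    """Call a third-party API with a hard timeout well under the proxy's limit."""
    try:
        with urllib.request.urlopen(url, timeout=budget_s) as resp:
            return resp.read()
    except (URLError, TimeoutError):
        # Fail fast: return a fallback instead of blocking the request thread.
        return None
```

The same discipline applies to database drivers: set a statement or connection timeout so a runaway query surfaces as an application error, not a gateway timeout.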
How to Fix It
Quick Fix: Increase Timeout (Symptom, Not Cure)
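If the backend genuinely needs longer, the proxy's timeout can be raised. The directives below are Nginx's; the values are illustrative. This buys time but hides the underlying slowness, and it only helps if Nginx holds the shortest timeout in the chain:

```nginx
location /api/ {
    proxy_pass http://backend;
    proxy_read_timeout    120s;  # wait longer for the upstream response
    proxy_connect_timeout 10s;   # keep connection establishment strict
}
```

Remember that every layer in front of Nginx (CDN, load balancer) must also allow the longer wait, or that layer will return the 504 instead.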
Real Fix: Optimize the Slow Backend
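For the slow-query cause, the usual real fix is an index on the filtered column. A minimal sketch using SQLite (the table and column names are made up); `EXPLAIN QUERY PLAN` shows the full scan turning into an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

def plan_for(query: str) -> str:
    """Return the last EXPLAIN QUERY PLAN detail line for a query."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return rows[-1][-1]

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, the WHERE clause forces a full table scan.
scan_plan = plan_for(query)    # e.g. "SCAN orders"

# An index on the filtered column turns the scan into a lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
search_plan = plan_for(query)  # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."

print(scan_plan)
print(search_plan)
```

On a table with millions of rows, that difference is what pulls the response back under the proxy's timeout.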
Move to Background Processing
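The pattern for long-running work is to enqueue it and respond immediately, letting a worker finish outside the request/response cycle. A hedged sketch with Python's standard library (names are illustrative; production systems typically use a dedicated job queue such as Celery or Sidekiq):

```python
import queue
import threading
import uuid

jobs: "queue.Queue[tuple[str, int]]" = queue.Queue()
results: dict = {}

def worker() -> None:
    """Runs outside the request cycle, so the proxy never waits on it."""
    while True:
        job_id, n = jobs.get()
        results[job_id] = sum(range(n))  # stand-in for a slow report/computation
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(n: int) -> str:
    """Instead of computing inline (risking a 504), enqueue and return a job id."""
    job_id = uuid.uuid4().hex
    jobs.put((job_id, n))
    return job_id  # the client polls a status endpoint with this id
```

The handler now responds in milliseconds regardless of how long the job takes.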
Add Caching to Prevent Repeated Slow Queries
Debugging Timeout Chains
When you see a 504, you need to identify which layer timed out. Check timeouts from outside in:
- CDN layer (Cloudflare, CloudFront): Check if the CDN's error page appears. Cloudflare shows its branded 504 page if its 100-second timeout expires. CloudFront shows its own error page with a request ID.
- Load balancer (ALB, ELB, HAProxy): Check ALB access logs for the target_processing_time field. If it equals the idle timeout, the backend did not respond in time.
- Reverse proxy (Nginx, Apache): Check error logs for "upstream timed out" messages. Nginx logs the upstream address and the timeout duration.
- Application server: Add request timing logs. If the application itself does not log the request at all, the connection to it failed. If it logs receiving the request but no response, the bottleneck is inside the application (database, external API call).
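For the application layer, a minimal timing-log sketch (a plain decorator here; framework middleware would do the same job) that records when a request arrives and how long the handler took:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("timing")

def timed(handler):
    """Log request start and duration so slow handlers show up in the logs."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        log.info("request start: %s", handler.__name__)
        start = time.monotonic()
        try:
            return handler(*args, **kwargs)
        finally:
            log.info("request done: %s took %.3fs",
                     handler.__name__, time.monotonic() - start)
    return wrapper

@timed
def get_report():
    time.sleep(0.01)  # stand-in for handler work
    return "ok"
```

With these logs in place, "start logged but no done line" points at a hang inside the handler, and "no start line at all" points at the connection never reaching the application.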
Frequently Asked Questions
How do I fix a 504 Gateway Timeout? Quick fix: raise the proxy timeout (for example, proxy_read_timeout in Nginx), which treats the symptom. Real fix: optimize slow queries (add indexes), move long-running tasks to background jobs, add caching layers, and implement pagination for large data sets.