Symptom: nginx logs "upstream prematurely closed connection while reading response header from upstream" and clients see HTTP 500, 502, or 504 responses; on the Spring side, gateway and WebClient applications log the mirror-image error, "Connection prematurely closed BEFORE response".

On the gateway side, what happens is that the gateway takes an already-disconnected connection from its connection pool, which produces the error. Solution: because the backend server is the provider of the connection and the gateway is the consumer, try to ensure that the consumer disconnects before the provider, i.e. set the pool's max idle time to a value no greater than the upstream's connection timeout.

"It should be noted that this timeout cannot usually exceed 75 seconds." - Richard Herries, Jan 30, 2019 at 8:11. According to the NGINX docs, proxy_connect_timeout, which "defines a timeout for establishing a connection with a proxied server", can't be longer than 75s.

One report, captured with tcpdump: the client receives a response, closes its side of the connection by sending a FIN segment to the server, and then fails with "Connection prematurely closed BEFORE response". The reporters were unable to reproduce the situation in their test environment; it happens purely in production.

Another report: a script installed nginx and mono and set up nginx.conf and the other configuration files. Everything worked fine for a while, then all of a sudden the site stopped working and requests kept timing out. After removing this installation, installing PostgreSQL, setting up the database, and replacing the files in $HOME/www/ with an MVC 4 web application that works locally, running it produced a 502 Bad Gateway error, with the following entry in the error log:

2015/06/01 18:48:07 [error] 82618#0: *5006 upstream prematurely closed connection while reading response header from upstream, client: ...

A gunicorn variant looks like a long-running synchronous request: the user uploads a large file, it is processed in the view, which takes a significant amount of time, and only afterwards is a response sent back. While the file is being processed there is no response to the user, the gunicorn worker is killed due to timeout, and nginx drops the connection at exactly 60 seconds; the option uwsgi_read_timeout does its job for anything less than 60. What might be the probable causes on the nginx side?

Header size can also trigger it. In one case the Bearer token was large: after removing some claims and generating a smaller token the request went through, but as the Authorization header size increased, the client again got "Connection prematurely closed BEFORE response".

Another report: "We are seeing sporadic nginx errors 'upstream prematurely closed connection while reading response header from upstream' with nginx/1.6.2, which seems to be some kind of race condition. For debugging purposes we set up only one upstream server, on a public IP address of the same server as nginx, with no keepalive configured." Some connections get closed within 2 seconds, others only after 20 seconds or so, which is what makes the reporters believe there is a network issue between the VMs or some issue with the application load balancer itself.

Pedro Carvalho asks: "NGINX - HTTP 500 upstream prematurely closed connection while reading response header from upstream. We have an Apache server inside a Docker container, and NGINX in another one as a proxy server. Sometimes the website stops answering, with HTTP 500 or HTTP 502 errors." A similar report: "My script continues to run and completes after 70 seconds, however my browser connection has died before then (502 error)."

Here are the steps to fix the NGINX "upstream prematurely closed" error:

1. Open the NGINX configuration file. Open a terminal and run:

$ sudo vi /etc/nginx/nginx.conf

2. Increase the proxy timeouts. Add the following lines to increase the proxy timeout for the upstream server.
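A minimal sketch of step 2, assuming a standard proxy_pass setup; the 300s values are illustrative, not taken from any of the reports above, and per the 75-second note, raising proxy_connect_timeout further is usually pointless:

    # inside the http, server, or location block of /etc/nginx/nginx.conf
    proxy_connect_timeout 75s;    # handshake with the upstream; values above 75s are usually not honored
    proxy_send_timeout    300s;   # max gap between two successive writes to the upstream
    proxy_read_timeout    300s;   # max gap between two successive reads from the upstream (covers slow views)
    send_timeout          300s;   # max gap between two successive writes to the client

    # for a uWSGI backend (uwsgi_pass), the read timeout is instead:
    uwsgi_read_timeout    300s;

Reload nginx afterwards, e.g. with $ sudo nginx -s reload. Remember to raise the application-server timeout (gunicorn/uWSGI) to match, or the worker will still be killed at its own limit.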
Two other errors from the same family show up in the logs of affected setups: "FastCGI sent in stderr: 'Primary script unknown' while reading response header from upstream" and "recv() failed (104: Connection reset by peer) while reading response header from upstream".

Two nginx directives are relevant to how these connections are torn down. lingering_close controls how nginx closes client connections: the default value "on" instructs nginx to wait for and process additional data from a client before fully closing a connection, but only if heuristics suggest that the client may be sending more data; the value "always" will cause nginx to unconditionally wait for and process additional client data. reset_timedout_connection exists to reset connections that timed out while sending a response: when send_timeout occurs with reset_timedout_connection on, nginx instructs the kernel to drop the data from the socket buffer when the socket is closed, instead of trying to send it until TCP times out as well.

In the Docker scenario, the "upstream prematurely closed connection" part of the message means that the NGINX container was unable to get the website content from the back end, so it is clear that the website's Docker container has a faulty web server. There is also the sibling message "client prematurely closed connection", seen for instance with nginx as a proxy in front of two upstream servers, where some requests go to server A and others to server B; there it is the client, not the upstream, that went away.

On the Spring Cloud Gateway side (observed on Hoxton.SR10 and 2020.0.1), the issue can be reproduced as follows; notably, it could not be reproduced with nginx as the backend, for example. Run the gateway behind a reverse proxy (nginx in this case), add a route in the main configuration file, load test the gateway with a high number of requests per second and many concurrent workers, set a value for maxIdleTime (null -> 100 ms), and set the pool type to FIXED instead of ELASTIC to avoid getting too many opened connections. A pool-configuration sketch follows the WebClient example below.

With WebClient itself, the problem seems to be that whenever you use it you have to return or consume the response; otherwise the client closes the connection while the body is still unconsumed, and the logs fill with messages saying the connection closed prematurely. In a scenario where a 404 status code is an error, use onStatus and throw instead of leaving the body unread. The reported workaround: instead of doing WebClient.create(url), use the builder and create a new connection, as sketched below.
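A minimal sketch of both points, assuming Spring WebFlux; the base URL, endpoint, and exception type are placeholders, not from the original issue:

    import org.springframework.web.reactive.function.client.WebClient;
    import reactor.core.publisher.Mono;

    public class WebClientSketch {
        public static void main(String[] args) {
            // Built via the builder, per the reported workaround; the base URL is hypothetical.
            WebClient client = WebClient.builder()
                    .baseUrl("http://localhost:8080")
                    .build();

            Mono<String> body = client.get()
                    .uri("/resource")
                    .retrieve()
                    // Map 404 to an application error explicitly instead of leaving the body unread.
                    .onStatus(status -> status.value() == 404,
                            response -> Mono.error(new IllegalStateException("404 from upstream")))
                    .bodyToMono(String.class); // consuming the body lets the pooled connection be released cleanly

            System.out.println(body.block());
        }
    }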
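And a sketch of the pooling advice from the top of this page (consumer idle time below the provider's timeout) combined with the FIXED pool from the reproduction steps, using the Reactor Netty API; the 60-second upstream timeout and 30-second idle time are assumed values for illustration only:

    import java.time.Duration;
    import reactor.netty.http.client.HttpClient;
    import reactor.netty.resources.ConnectionProvider;

    public class PoolSketch {
        public static void main(String[] args) {
            // Assumption: the upstream closes idle keep-alive connections after 60s,
            // so the client-side pool must evict them earlier (30s here).
            ConnectionProvider pool = ConnectionProvider.builder("gateway-pool")
                    .maxConnections(500)                 // bounded pool (the FIXED style) rather than elastic
                    .maxIdleTime(Duration.ofSeconds(30)) // keep this below the upstream's keep-alive timeout
                    .build();

            HttpClient client = HttpClient.create(pool);
            System.out.println("pool configured for " + client);
        }
    }

In Spring Cloud Gateway the same knobs are, as far as I can tell, exposed as the spring.cloud.gateway.httpclient.pool.type and spring.cloud.gateway.httpclient.pool.max-idle-time properties.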