Nginx PHP Failing with Large File Uploads (Over 6 GB)





I am having a very weird issue uploading large files over 6 GB. My process works like this:



My PHP (HHVM) and Nginx configurations are both set to allow files of up to 16 GB; my test file is only 8 GB.



Here is the weird part: the AJAX request will ALWAYS time out, but the file is successfully uploaded: it gets copied to the tmp location, the location is stored in the DB, it goes to S3, etc. Yet the AJAX request keeps running for an hour even AFTER all the execution has finished (which takes 10-15 minutes) and only ends when it times out.



What could be causing the server not to send a response, but only for large files?



Also, the error logs on the server side are empty.





You need to post the code if you want some help figuring out what might be wrong with it.
– Barmar
Jun 5 '17 at 15:04





Do you make repeated use of session_write_close() and session_start() in your ajax upload script? This can cause issues.
– drew010
Jun 5 '17 at 15:27





@devin has the problem been solved?
– Anatoly
Jun 11 at 6:57




1 Answer



A large file upload is an expensive and error-prone operation. Nginx and the backend should both have correct timeouts configured to handle slow disk I/O if it occurs. In theory it is straightforward to manage a file upload using the multipart/form-data encoding (RFC 1867).
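As a sketch of what "correct timeouts" could mean here (the values are illustrative assumptions, and whether the backend sits behind proxy_pass or fastcgi_pass depends on the setup):

```nginx
# Illustrative timeout settings for slow, large uploads (values are assumptions)
client_body_timeout 300s;    # max pause between two successive reads of the request body
send_timeout        300s;    # max pause between two successive writes of the response
proxy_read_timeout  1800s;   # how long to wait for the proxied backend's response
proxy_send_timeout  1800s;   # how long to wait while sending the request to the backend
```

If the backend needs 10-15 minutes to process an 8 GB file, a proxy_read_timeout below that will cut the connection before the response arrives.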



According to developer.mozilla.org, in a multipart/form-data body the HTTP Content-Disposition general header is used on each subpart of the multipart body to give information about the field it applies to. Each subpart is delimited by the boundary defined in the Content-Type header. Used on the body itself, Content-Disposition has no effect.
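As a purely illustrative example (the boundary, field name, and file name are made up), a multipart body carrying one file might look like this; note the Content-Disposition header on the subpart:

```http
POST /upload HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=----XYZ

------XYZ
Content-Disposition: form-data; name="file"; filename="test.bin"
Content-Type: application/octet-stream

...binary file content...
------XYZ--
```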



Let's see what happens while a file is being uploaded:



1) client sends an HTTP request with the file content to the webserver

2) webserver accepts the request and initiates the data transfer (or returns error 413 if the file size exceeds the limit)

3) webserver starts to populate buffers (depending on the file and buffer sizes)

4) webserver sends the file content via a file/network socket to the backend

5) backend authenticates the initial request

6) backend reads the file and strips the headers (Content-Disposition, Content-Type)

7) backend dumps the resulting file to disk

8) any follow-up procedures run, such as database changes



client_body_in_file_only off;
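With this default setting, the buffering in steps 3-4 above is governed by a few core directives; a minimal sketch (values are illustrative assumptions):

```nginx
# Default buffered upload path (values are illustrative)
client_max_body_size     16G;          # larger uploads are rejected with 413
client_body_buffer_size  1M;           # in-memory buffer before spilling to disk
client_body_temp_path    /tmp/nginx;   # where oversized bodies are spooled
client_body_in_file_only off;          # default: do not force the body into a file
```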



During large file uploads, several problems can occur.



Let's start with Nginx configured with a new location http://backend/upload to receive the large file upload. Back-end interaction is minimised (Content-Length: 0) and the file is stored straight to disk. Using its buffers, Nginx dumps the file to disk (the file is stored in the temporary directory under a random name, which cannot be changed), followed by a POST request to the backend location http://backend/file with the file name in the X-File-Name header.



To keep extra information, you may use headers in the initial POST request. For instance, having an X-Original-File-Name header from the initial request helps you match the file and store the necessary mapping information in the database.



client_body_in_file_only on;



Let's see how to make it happen:



1) configure Nginx to dump the HTTP body content to a file and keep it stored: client_body_in_file_only on;



2) create a new backend endpoint http://backend/file to handle the mapping between the temporary file name and the X-File-Name header



3) instrument the AJAX query with the X-File-Name header that Nginx will use to send the post-upload request
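The client side of these steps can be sketched with XMLHttpRequest2, sending the file as a raw binary body rather than multipart/form-data (the /upload path and the X-Original-File-Name header name are assumptions for illustration, not from the original answer):

```javascript
// Sketch: upload a File/Blob as a raw binary body, so that Nginx's
// client_body_in_file_only can dump it straight to disk.
// The '/upload' path and 'X-Original-File-Name' header are assumptions.
function uploadFile(file, onDone, onError) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/upload');
  // Carry the original file name so the backend can map it to the random
  // temporary name that Nginx passes along in X-File-Name.
  xhr.setRequestHeader('X-Original-File-Name', encodeURIComponent(file.name));
  xhr.onload = function () { onDone(xhr.status); };
  xhr.onerror = xhr.ontimeout = onError;
  xhr.send(file); // no FormData: the Blob itself is the request body
}
```

Because the body is raw binary, Nginx never has to parse multipart boundaries, which is exactly why this works with client_body_in_file_only.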



Configuration:


location /upload {
    client_body_temp_path      /tmp/;
    client_body_in_file_only   on;
    client_body_buffer_size    1M;
    client_max_body_size       7G;

    proxy_pass_request_headers on;
    proxy_set_header           X-File-Name $request_body_file;
    proxy_set_body             off;
    proxy_redirect             off;
    proxy_pass                 http://backend/file;
}



The Nginx configuration option client_body_in_file_only is
incompatible with multipart data uploads, but you can use it with AJAX,
i.e. XMLHttpRequest2 (binary data).



If you need back-end authentication, the only way to handle it is to use auth_request, for instance:


location = /upload {
    auth_request /upload/authenticate;
    ...
}

location = /upload/authenticate {
    internal;
    proxy_set_body off;
    proxy_pass     http://backend;
}



client_body_in_file_only on; auth_request on;



The pre-upload authentication logic protects against unauthenticated requests, regardless of the size of the initial POST's Content-Length.





Brilliant! How would you configure this to operate without the proxy? Would it work to add the line return 201 $request_body_file; instead?
– bcattle
Aug 22 '17 at 3:31








@bcattle proxy_pass doesn't necessarily have to be a backend upstream URL; it can be a named internal location. What are you trying to achieve?
– Anatoly
Aug 22 '17 at 6:17






