Request Entity Too Large - configuration

I get this message:
Request Entity Too Large
The requested resource
/index.php
does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
I set
php_value post_max_size 50M
php_value upload_max_filesize 50M
in .htaccess, but it didn't help.
How can I overcome this?
Thanks

Once you have already tried raising PHP's memory_limit, post_max_size and upload_max_filesize, I'd like to recommend some articles related to the topic; maybe one of them solves the problem.
I found this post on Server Fault:
https://serverfault.com/questions/79741/php-apache-post-limit/79745#79745
sybreon suggests double-checking the Content-Length and, quoting: "ensure that you are directly connecting to Apache and not through either a proxy or a reverse-proxy. Some reverse-proxies place a cap on the maximum size of a request as a sort of security measure. So, you may want to check that as well as your Apache logs to ensure that nothing else is going on."
sybreon also posted this link: Apache 413 error problems.
The following is only applicable if you have the mod_ssl module enabled in Apache. (Otherwise this setting can cause a server crash.)
Citing the article:
"I was using Apache SSL client certificates, which have a limit of 128K, and if re-negotiation has to happen, a larger POST will fail.
This Bugzilla posting had the clues - You have to set the following as DEFAULTS for your SSL server, not just the directory.
SSLVerifyClient require
Otherwise it forces a renegotiation of some sort, and fails with a 413 error."
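To illustrate the point, a minimal sketch (paths are placeholders, not from the article): the directive is set once for the whole SSL virtual host rather than per directory, so no mid-request renegotiation is triggered:
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /path/to/server.crt
    SSLCertificateKeyFile /path/to/server.key
    # Server-wide default, not inside a <Directory> block
    SSLVerifyClient require
</VirtualHost>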
The previous article also mentioned the LimitRequestBody directive.
A user reports here that setting this directive appropriately solved his problem.
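For reference, a hedged example (the 50 MB value mirrors the question's PHP limits; it is an illustration, not from the linked article):
# 50 MB = 50 * 1024 * 1024 bytes; set in the server config, a vhost, or .htaccess
LimitRequestBody 52428800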
I hope one of these settings solves this problem!

The only thing that worked for me was to turn up the SSL renegotiation buffer size. You can set this by...
<Directory /my/blah/blah>
...
# Set this to something big...
SSLRenegBufferSize 10486000
...
</Directory>
...and then just restart Apache for the change to take effect. (Found this at: http://forum.joomla.org/viewtopic.php?p=2085574)
You can also use "Location /" to simply apply the setting to a whole VirtualHost:
<VirtualHost *:443>
# ...
<Location />
SSLRenegBufferSize 101048600
</Location>
# ...
</VirtualHost>

My server is Apache. It was the mod_security module that was preventing POSTs of large data (approximately 171 KB).
I made the following changes in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
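For context, a sketch of how these directives fit together in a ModSecurity 2.x config (the comments and the SecRequestBodyLimit line are my additions; check your version's docs):
# Overall request body cap in bytes (13107200 is the commonly cited 2.x default)
SecRequestBodyLimit 13107200
# Cap for request bodies that carry no file uploads
SecRequestBodyNoFilesLimit 10486000
# How much of the request body ModSecurity buffers in memory
SecRequestBodyInMemoryLimit 10486000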

If post_max_size and upload_max_filesize have been set in PHP,
and LimitRequestBody is set high enough in apache2.conf or the ModSecurity config files,
then a .htaccess file will possibly work:
1. Go to the directory with the upload PHP file in it (the file or page throwing the error).
2. Make or edit .htaccess.
3. Edit or create a line with LimitRequestBody 20971520 in it.
4. Save the .htaccess and set permissions (644, owned by the Apache user).
5. Possibly restart Apache.
Tada. Hopefully fixed.
This sets the limit for this folder only, which is one way to avoid a global setting in PHP and Apache that would leave you open to large-packet / load DoS attacks.
LimitRequestBody 0 gives you unlimited uploads.
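Putting the pieces together, a hypothetical .htaccess for the upload folder (the php_value lines assume mod_php; under FastCGI/PHP-FPM they belong in php.ini or .user.ini instead):
# Apache-side cap: 20 MB = 20 * 1024 * 1024 bytes
LimitRequestBody 20971520
# PHP-side caps, matching the Apache limit
php_value post_max_size 20M
php_value upload_max_filesize 20M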

I was struggling with this 413 Request Entity Too Large problem for the last day or so, as I was trying to upload fairly large (multi-MB) images to the server.
My setup is Apache (227) proxying requests to a JBoss EAP (6.4.20) server for accessing REST endpoints.
Two things worked for me:
1. Making SSLVerifyClient require at the virtual-host level. This means every resource needs a valid client cert presented in order to be served. All resources except /api should NOT be mutual-auth protected, so while this worked, it was not an option for me.
2. Removing the global-level SSLVerifyClient require and keeping it 'optional', then re-enabling require only on <Location /api>...</Location>. The trick was to have the SSL renegotiation happen only after a certain threshold is reached, which would be our desired upload file size; see the sketch after this list.
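A minimal sketch of that optional-plus-Location arrangement (vhost details are illustrative, not the answerer's exact config):
<VirtualHost *:443>
    SSLEngine on
    # Vhost-wide default: request a client cert but do not require one
    SSLVerifyClient optional
    <Location /api>
        # Mutual auth enforced only for /api, so only these requests renegotiate
        SSLVerifyClient require
    </Location>
</VirtualHost>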
So, finally, it turned out that I had to enable the SSLRenegBufferSize setting on a specific LocationMatch, as follows:
<LocationMatch ^/api/v1/path/(.*)/to/(.*)/resource/endpoint$>
# Allow up to 5 MB for files to come through
SSLRenegBufferSize 5242880
</LocationMatch>
(.*) in the case above represents my path params in the endpoint. Hope this helps.

After raising PHP's memory_limit, post_max_size and upload_max_filesize in php.ini, I still had the problem.
What was also needed was the following in apache2.conf:
LimitRequestBody 1000000000
That's for a max size of 1GB.
The docs say that 0 is the default, which means unlimited. However, until I set the directive, I couldn't upload large files.
Don't forget to restart apache2.
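For completeness, on a Debian/Ubuntu-style layout (an assumption; adjust for your distro) the syntax check and restart look like:
# Validate the config before restarting, then restart Apache
sudo apachectl configtest
sudo systemctl restart apache2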

Related

Unable to resolve .local domains with getent even though avahi-resolve-host-name succeeds

Trying to set up a network printer with CUPS.
Followed online documentation that stated:
To discover or share printers using DNS-SD/mDNS, setup .local hostname
resolution with Avahi and restart cups.service.
Followed directions for setting up Avahi to the point where avahi-browse --all --ignore-local --resolve --terminate and avahi-resolve-host-name my-domain.local are both working.
But getent hosts my-domain.local fails to resolve. This results in CUPS failing to print because it can't find my-printer.local.
I read the nss-mdns GitHub page and saw a note that made me think I didn't need a /etc/mdns.allow file.
nss-mdns has a simple configuration file /etc/mdns.allow for enabling
name lookups via mDNS in other domains than .local.
Note: The "minimal" version of nss-mdns does not read /etc/mdns.allow under any circumstances. It behaves as if the file
does not exist.
In the recommended configuration, no /etc/mdns.allow file is present.
But then I saw the last note in that section:
If, during a request, the system-configured unicast DNS (specified in
/etc/resolv.conf) reports an SOA record for the top-level local name,
the request is rejected. Example: host -t SOA local returns something
other than Host local not found: 3(NXDOMAIN). This is the unicast SOA
heuristic.
I tested that out on my machine and sure enough, I was getting something OTHER than Host local not found....
I added a /etc/mdns.allow file with a line for .local. and one for .local, and now I can ping my-printer.local.
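For reference, the resulting /etc/mdns.allow needs nothing more than the two domain lines (format per the nss-mdns README):
.local.
.local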

"Could not get any response" response when using postman with subdomain

I am using Postman to test an API I have. All is good when the request does not contain a subdomain; however, when I add a subdomain to the URL, I get this response:
Could not get any response
There was an error connecting to http://subdomain.localhost:port/api/
Why this might have happened:
The server couldn't send a response: Ensure that the backend is working properly.
Self-signed SSL certificates are being blocked: Fix this by turning off 'SSL certificate verification' in Settings > General.
Proxy configured incorrectly: Ensure that proxy is configured correctly in Settings > Proxy.
Request timeout: Change request timeout in Settings > General.
If I copy the same URL from Postman and paste it into the browser, I get a proper response. Is there some kind of configuration I should do to make Postman work with subdomains?
First, go to Settings in Postman:
Turn off SSL certificate verification in the General tab.
Turn off Global Proxy Configuration and Use System Proxy in the Proxy tab.
Set Request Timeout to 0 (zero).
Configure Apache:
If the above changes resulted in a 404 response, then continue reading ;-)
Users that host their site locally (like with XAMPP and/or WAMP) may be able to visit their virtual sites using an https:// prefixed address, but it's a lie; to really enable SSL (for each virtual site), configure Apache like this:
Open httpd-vhosts.conf file (from Apache's conf/extras directory), in your preferred text editor.
Change the virtual site's settings, into something like:
<VirtualHost *:80 *:443>
ServerName my-site.local
ServerAlias *.my-site.local
DocumentRoot "C:\xampp\htdocs\my-project\public"
SSLEngine on
SSLCertificateFile "path/to/my-generated.cert"
SSLCertificateKeyFile "path/to/my-generated.key"
SetEnv APPLICATION_ENV "development"
<Directory "C:\xampp\htdocs\my-project\public">
Options Indexes FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
But of course, generate a dummy SSL certificate and change all the file paths (like "path/to/my-generated.cert") into real file paths.
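If you still need that dummy certificate, one way to generate a self-signed key/cert pair with OpenSSL (file names chosen to match the placeholders above) is:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=my-site.local" \
  -keyout my-generated.key -out my-generated.cert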
Finally, test by visiting the local site in the browser, but using an http:// (without the S) prefixed address; Apache should now give an error like:
Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
I had the same issue. It was caused by a newline at the end of the "Authorization" header's value, which I had set manually by copy-pasting the bearer token (which accidentally contained the newline at its end).
If you get a "Could not get any response" message from Postman native apps while sending your request, open Postman Console (View > Show Postman Console), resend the request and check for any error logs in the console.
Thanks to numaanashraf
This issue was resolved for me by setting
Settings -> General -> Request timeout in ms = 0
If all the above methods don't work, check your environment variables and make sure the following are not set. If they are set and not needed by any other application, remove them (see the sketch below).
HTTP_PROXY
HTTPS_PROXY
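On a Unix-like shell, checking for and clearing them in the current session might look like this (on Windows, use the environment-variables dialog instead):
# List any proxy-related variables currently set
env | grep -i proxy
# Remove them for this shell session
unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy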
For me, it was using http://localhost instead of https://localhost.
When you get this error, do the following.
Step 1:
In Postman, click the wrench icon, go to settings, then go to the Proxy tab.
Step 2:
Create a custom Proxy. This article explains how to create a custom proxy.
After you create the custom Proxy, make sure you turn the Proxy toggle button to off. I put 61095 in for the proxy server and it worked for me.
Step 3:
Success
I came up with this solution:
In Postman, go to Settings --> Proxy.
Turn off Global Proxy Configuration and turn on Use System Proxy.
Then go to the Windows hosts file,
C:\Windows\System32\drivers\etc\hosts
open that file in administrator mode, and add the subdomain to the hosts file.
For me, what worked was to add 127.0.0.1 subdomain.localhost to my hosts file. On OSX that was /etc/hosts. Not sure why that was necessary, as I could reach the subdomain from Chrome.
For me, it was that the route I was calling in my Node server wasn't returning anything. Adding
return res.status(200).json({
message: 'success!',
response: 'success!'
});
to the route I was calling resolved the issue.
You mentioned you are using a CER certificate.
According to the Postman page on certificates:
Choose your client certificate file in the CRT file field. Currently, we only support the CRT format. Support for other formats (like PFX) will come soon.
The extension name (CER or CRT) doesn't make the certificate that type of certificate, but these are the expected extension names.
CER is an X.509 certificate in binary form, DER encoded.
CRT is a binary X.509 certificate, encapsulated in text (base-64) encoding.
You can use OpenSSL to change a CER file into a CRT file. I have not had good luck with it, but it looks like this:
openssl x509 -inform PEM -in certificate.cer -out certificate.crt
or
openssl x509 -inform DER -in certificate.cer -out certificate.crt
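A quick way to tell which -inform you need (my suggestion, not from the Postman docs): a PEM file is plain text that starts with a BEGIN CERTIFICATE header, while DER is binary:
head -1 certificate.cer
# "-----BEGIN CERTIFICATE-----" means PEM; binary noise means DER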
Postman for Linux Version 6.7.1 - Ubuntu 18.04 - linux 4.15.0-43-generic / x64
I had the same problem, and by chance I replaced http://localhost with http://127.0.0.1 and everything worked.
My /etc/hosts had the proper entries for localhost, and https://localhost requests always worked as expected.
I have no clue why replacing localhost with 127.0.0.1 for HTTP solved the issue.
None of these solutions worked for me. Postman was not sending any request to the server because it could not find the host. So modify your /etc/hosts to:
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
It works for me.
For me the issue was that the Content-Length was too big. I placed the content of the body in Notepad++, counted the characters, put that figure into Postman, and then it worked.
I know it does not directly answer why the OP's subdomain was not working, but it might help out someone.
In my case it was invisible spaces that Postman didn't recognize; the string of text rendered without visible spaces in Postman.
I disabled SSL certificate validation and the system proxy, and even tried the Postman Chrome extension (which is about to be deprecated). But when I downloaded and tried Insomnia, it showed red dots in the places where those spaces were; they must have gotten there during copy/paste.
For anyone who experienced this issue with a real domain instead of localhost and couldn't solve it using ANY OF THE ABOVE solutions:
Try changing your network DNS (Wi-Fi or LAN) to some other DNS. For me, Google DNS (8.8.8.8, 8.8.4.4) worked!
The solution is very simple if you are using an ASP.NET Core 2 application. Inside the ConfigureServices method in the Startup.cs file, add these lines:
services.AddMvc()
.SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
.AddJsonOptions(x => x.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore);
You just need to turn SSL off to send your request.
The proxy and other settings come with various errors.
My issue was caused by putting the wrong parameter in the header.
The expected parameter was
Authorization: Token <string>
and I was trying
Authorization Token: <string>
After applying the methods above (turning OFF SSL certificate verification, turning ON only Use System Proxy, and removing the HTTP_PROXY and HTTPS_PROXY system environment variables), it worked.
Note: I had to restart the Postman app, since the environment variables had changed.
Unchecking proxy and SSL Certificate Verification didn't work for me.
Unsetting PROXY environment variables did the trick.
export http_proxy=
export ftp_proxy=
export https_proxy=
Change to the directory where Postman is installed and then:
./Postman
In my case, MVC wasn't able to serialize the results (I accidentally used a model instead of a DTO). I debugged down to passing a simple string, which worked. Once I fixed the serialization, it all came up.
In my case the (corporate) proxy was using a self-signed SSL certificate which Postman disliked. I discovered it by activating
View->Show Postman console
and retrying the request. The console then showed the certificate error. In
Settings->General
I disabled
SSL certificate verification.
As I'm using the deprecated Postman extension for Chrome, to solve this issue I had to:
Call some GET request using the Chrome Browser itself.
Wait for the error page "Your connection is not private" to appear.
Click on ADVANCED and then proceed to [url] (unsafe) link.
After this, requests through the extension itself should work.
In my case it was a misconfigured subnet. Only one of the two subnets in the ELB worked.
I figured this out by doing an nslookup and trying to curl the returned IPs directly. Only one worked.
Postman just kept using the misconfigured one.
I had the same issue.
Turned out my timeout was set too low. I changed it to 30ms thinking it was 30sec. I set it back to 0 and it started working again.
I got the same "Could not get any response" issue because of a wrong parameter in the header. I fixed it by removing the HOST parameter from the header.
PS: Unfortunately, I had to install other software to get this information. It would be great to get this error message from Postman instead of a generic one.
In my case, I forgot to set the value of the variable in the "CURRENT VALUE" field.
I just experienced this error. In my case, the path was TOO LONG. So a URL like this gave me the error in Postman (fake example):
http://127.0.0.1:5000/api/batch/upload_import_deactivate_from_ready_folder
whereas
http://127.0.0.1:5000/api/batch/upld_impt_deac_ready_folder
worked fine.
Hope it helps someone who happened to read this far...

413: FULL head when pushing to Mercurial repository behind Nginx

I have a Mercurial repository running on SCM-Manager, proxied behind Nginx. A variety of smaller repositories run fine, so the basic setup seems OK.
Additionally, this same box runs Owncloud. I've tweaked the client_max_body_size on the server to 1000M so large files can be transferred. This works, and I have a variety of large files syncing between the server and clients.
However, when I try pushing a large Mercurial repository for the first time (1007 commits vs. about 80 for the other largest on this system) I get the following:
abort: HTTP Error 413: FULL head
Everything I've read about 413 errors doesn't seem to apply. First, it recommends setting the body size, which I've stated is already at 1G. Next, this seems to imply that the header is too large, which makes sense given that it's probably trying to check 1000+ revisions in the remote repository.
Another thing I've encountered is large_client_header_buffers. I've set this to insanely huge values like "64 128k" at both the server and http levels (I read something about it not working at the server level), but that didn't change anything.
I also looked at the scm-manager logs but see nothing, so this seems to stop with Nginx.
Thoughts? Here is part of my Nginx server configuration:
server {
server_name thewordnerd.info;
listen 443 ssl;
ssl_certificate /etc/ssl/certs/thewordnerd.info.crt;
ssl_certificate_key /etc/ssl/private/thewordnerd.info.key;
root /srv/www/thewordnerd.info/public;
client_max_body_size 1000M;
location /scm {
proxy_pass http://127.0.0.1:8080/scm;
include /etc/nginx/proxy_params;
}
}
The problem is the header buffer of the application server; Mercurial uses very big headers. You have to increase the size of the header buffer, and this is application-server specific. If you are using the standalone version, you have to edit server-config.xml and increase the requestHeaderSize value.
replace:
<Set name="requestHeaderSize">16384</Set>
with:
<Set name="requestHeaderSize">32768</Set>
Source: https://groups.google.com/forum/#!topic/scmmanager/Afad4zXSx78
I had HTTP Error: 413 (Request Entity Too Large) on my attempt to push. I resolved it by adding client_max_body_size 2M; to /etc/nginx/nginx.conf. I'm left wondering whether maybe 1000M doesn't work as a value for client_max_body_size...
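In case it helps, client_max_body_size can be set at the http, server, or location level; a minimal placement sketch for nginx.conf:
http {
    # Allow request bodies up to 2 MB for everything this instance serves
    client_max_body_size 2M;
}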

Editing php.ini to allow certain features

Okay, so I've been testing and building a website on my computer through localhost, and everything works fine on my computer! I then wanted to upload it to my GoDaddy hosting account, and I got an error. I am using json_decode as an argument for one of the foreach loops in my PHP. When I run my site through the hosting provider, it tells me there is an invalid argument in the foreach() loop on line 43. I knew it had to do with my php.ini file, so I copied the one from my computer and pasted it into the php.ini file on GoDaddy for my site. Then the foreach() loop worked! But then all kinds of hell broke loose: session problems and such. So, my question is, what do I need to add to make json_decode work?
Thanks
Here is my php.ini file with the hosting provider:
register_globals = off
allow_url_fopen = off
expose_php = Off
max_input_time = 60
variables_order = "EGPCS"
extension_dir = ./
extension=json.so
upload_tmp_dir = /tmp
precision = 12
SMTP = relay-hosting.secureserver.net
url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=,fieldset="
; Only uncomment zend optimizer lines if your application requires Zend Optimizer support
;[Zend]
;zend_optimizer.optimization_level=15
;zend_extension_manager.optimizer=/usr/local/Zend/lib/Optimizer-3.3.3
;zend_extension_manager.optimizer_ts=/usr/local/Zend/lib/Optimizer_TS-3.3.3
;zend_extension=/usr/local/Zend/lib/Optimizer-3.3.3/ZendExtensionManager.so
;zend_extension_ts=/usr/local/Zend/lib/Optimizer_TS-3.3.3/ZendExtensionManager_TS.so
; -- Be very careful to not to disable a function which might be needed!
; -- Uncomment the following lines to increase the security of your PHP site.
;disable_functions = "highlight_file,ini_alter,ini_restore,openlog,passthru,
; phpinfo, exec, system, dl, fsockopen, set_time_limit,
;
You can't just replace the php.ini file, because it has hard-coded paths.
For example, with your session error, most likely the session.save_path setting references a directory that doesn't exist or has incorrect permissions.
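A hedged illustration of the line to look for in php.ini (the /tmp path is only an example; point it at a directory that exists and is writable by the web server):
; Directory where PHP writes its session files
session.save_path = "/tmp"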
Can you post the line of code that was on line 43? I am guessing that your local php.ini doesn't display the error, whereas the GoDaddy config does.

redis.conf include: "Bad directive or wrong number of arguments"

I've created this config for redis [/etc/redis/map.conf]:
include /etc/redis/ideal.conf
port 11235
pidfile /var/run/redis-map.pid
logfile /var/log/redis/map.log
dbfilename map.rdb
As you can see, it includes /etc/redis/ideal.conf; this file actually exists and we have read permissions.
Also there is another file, slightly different; consider [/etc/redis/storage.conf]:
include /etc/redis/ideal.conf
pidfile /var/run/redis-storage.pid
port 8000
bind 192.168.0.3
logfile /var/log/redis/storage.log
dbfilename dump_storage.rdb
My problem is: I can launch redis-server with storage.conf (and everything works fine), but map.conf leads to the following error:
Reading the configuration file, at line 1
>>> 'include /etc/redis/ideal.conf'
Bad directive or wrong number of arguments
failed
Version of redis is 2.2.
Where did I go wrong?
Sorry guys.
I was using different instances of Redis.
The instance for storage.conf was launched by /usr/local/bin/redis-server, but map.conf was launched by /usr/bin/redis-server; the second one is broken.
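A quick way to catch this kind of mix-up (illustrative commands; flag support can vary across Redis versions):
# List every redis-server on the PATH, then check each binary's version
which -a redis-server
/usr/bin/redis-server --version
/usr/local/bin/redis-server --version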
Thank you anyway.