413: FULL head when pushing to Mercurial repository behind Nginx

I have a Mercurial repository running on Scm-manager proxied behind Nginx. A variety of smaller repositories run fine, so the basic setup seems OK.
Additionally, this same box runs Owncloud. I've tweaked the client_max_body_size on the server to 1000M so large files can be transferred. This works, and I have a variety of large files syncing between the server and clients.
However, when I try pushing a large Mercurial repository for the first time (1007 commits vs. about 80 for the other largest on this system) I get the following:
abort: HTTP Error 413: FULL head
Everything I've read about 413 errors doesn't seem to apply. First, it recommends raising the body size, which as I said is already at 1G. Second, the error seems to imply that the header is too large, which makes sense given that the client is probably trying to check 1000+ revisions against the remote repository.
Another thing I've encountered is large_client_header_buffers. I've set this to insanely huge values like "64 128k" at both the server and http levels (I read something about it not working at the server level), but that didn't change anything.
I also looked at the scm-manager logs but see nothing, so this seems to stop with Nginx.
Thoughts? Here is part of my Nginx server configuration:
server {
    server_name thewordnerd.info;
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/thewordnerd.info.crt;
    ssl_certificate_key /etc/ssl/private/thewordnerd.info.key;
    root /srv/www/thewordnerd.info/public;
    client_max_body_size 1000M;
    location /scm {
        proxy_pass http://127.0.0.1:8080/scm;
        include /etc/nginx/proxy_params;
    }
}

The problem is the header buffer of the application server: Mercurial uses very large headers, so you have to increase the size of the header buffer, and this is application-server specific. If you are using the standalone version, you have to edit server-config.xml and increase the requestHeaderSize value.
replace:
<Set name="requestHeaderSize">16384</Set>
with:
<Set name="requestHeaderSize">32768</Set>
Source: https://groups.google.com/forum/#!topic/scmmanager/Afad4zXSx78
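For context, in the standalone SCM-Manager distribution this value is set on the Jetty connector definition inside server-config.xml. A rough sketch of the surrounding XML (the exact connector class may differ between SCM-Manager/Jetty versions, so treat it as orientation rather than a copy-paste fix):
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <!-- Mercurial packs changeset-discovery arguments into long x-hgarg-*
           request headers, so this must be large enough for big pushes -->
      <Set name="requestHeaderSize">32768</Set>
    </New>
  </Arg>
</Call>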

I had HTTP Error: 413 (Request Entity Too Large) on my attempt to push. It was resolved by adding client_max_body_size 2M; to /etc/nginx/nginx.conf. I wonder whether even 1000M might not be enough for your push...
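For reference, a minimal sketch of where that directive can live (the value is just an example; it can be set at http, server, or location level and must be at least as large as the biggest request body you expect):
http {
    client_max_body_size 1000M;
}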

Related

Unable to resolve .local domains with getent even though avahi-resolve-host-name succeeds

Trying to set up a network printer with CUPS.
Followed online documentation that stated:
To discover or share printers using DNS-SD/mDNS, setup .local hostname
resolution with Avahi and restart cups.service.
Followed directions for setting up Avahi to the point where avahi-browse --all --ignore-local --resolve --terminate and avahi-resolve-host-name my-domain.local are both working.
But getent hosts my-domain.local fails to resolve. This results in CUPS failing to print because it can't find my-printer.local.
I read the nss-mdns GitHub page and saw a note that made me think I didn't need a /etc/mdns.allow file.
nss-mdns has a simple configuration file /etc/mdns.allow for enabling
name lookups via mDNS in other domains than .local.
Note: The "minimal" version of nss-mdns does not read /etc/mdns.allow under any circumstances. It behaves as if the file
does not exist.
In the recommended configuration, no /etc/mdns.allow file is present.
But then I saw the last note in that section:
If, during a request, the system-configured unicast DNS (specified in
/etc/resolv.conf) reports an SOA record for the top-level local name,
the request is rejected. Example: host -t SOA local returns something
other than Host local not found: 3(NXDOMAIN). This is the unicast SOA
heuristic.
I tested that out on my machine and sure enough, I was getting something OTHER than Host local not found....
I added a /etc/mdns.allow file with a line for .local. and one for .local, and now I can ping my-printer.local.
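For reference, the whole /etc/mdns.allow file in that case is just the two allowed domain suffixes mentioned above, one per line:
.local.
.local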

HAProxy - Rewriting URL's transparently

I need to implement a URL rewriting action for a project. This has to be done with HAProxy 1.5 because it is implemented on a pfSense firewall and later versions are not available at this point.
I have the following URLS:
update.domain.com
repository.domain.com
which both point to the same backend server1. The challenge now is to move the document root:
- update.domain.com >> /some/path/repo1
- repository.domain.com >> /some/path/repo2
Not only is the document root moved, but due to an earlier implementation with TMG servers there are existing links that point to files like this:
update.domain.com/file1.txt
I have tried to work with http-request set-path and some ACLs on the frontend, but unfortunately this function is only available in HAProxy 1.6 and later:
frontend www
    bind *:80
    acl update_url hdr_beg(host) -m beg update.domain.com
    acl update_root path_beg /some/path/repo1/
    http-request set-path /some/path/repo1%[path] if !update_root update_url
    use_backend testServer if update_root update_url
    default_backend testServer
Links to files such as update.domain.com/file1.txt can't be changed, and keeping TMG is not a solution. How can I get this working with HAProxy 1.5?
For HAProxy 1.5, you can use reqrep, which will replace the request line (and any header lines) with what you specify in your regex, e.g. something like:
reqrep ^([^\ :]*)\ /some/path/repo1/(.*) \1\ /some/path/repo2\2
A more detailed explanation of how to use reqrep can be found here.
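To tie this back to the host-based setup from the question, here is a sketch of how the reqrep approach could look in HAProxy 1.5, using one backend per hostname so each can prepend its own document root (the backend names bk_update/bk_repo and the server address server1 10.0.0.1:80 are placeholders, not from the original config):
frontend www
    bind *:80
    acl host_update hdr(host) -i update.domain.com
    acl host_repo hdr(host) -i repository.domain.com
    use_backend bk_update if host_update
    use_backend bk_repo if host_repo
    default_backend testServer
backend bk_update
    # prepend the new document root to the request path
    reqrep ^([^\ :]*)\ /(.*) \1\ /some/path/repo1/\2
    server server1 10.0.0.1:80
backend bk_repo
    reqrep ^([^\ :]*)\ /(.*) \1\ /some/path/repo2/\2
    server server1 10.0.0.1:80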

Chrome ignores Nginx upstreams (loads only first)

I have a simple setup of 3 servers (in containers): 2 "app" servers (whoami services, so I can tell from the response which server answered) and an nginx server.
I've launched nginx with a simple load-balancing configuration:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    upstream myapp1 {
        server w1:8000 weight=1;
        server w2:8000 weight=1;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1/;
        }
    }
}
The problem is that it doesn't work in Chrome: it always loads only the first server. I've tried turning off the cache in the Dev console and reloading via CTRL+F5, but nothing helped.
If I curl the nginx server, I get responses in round-robin order (as expected).
Here is my containers setup:
docker network create testnw
docker run -dit --name w1 --network testnw jwilder/whoami # app1
docker run -dit --name w2 --network testnw jwilder/whoami # app2
docker run -dit --name ng --network testnw -p 8989:80 -v ${PWD}/my.conf:/etc/nginx/nginx.conf nginx # LB server
curl localhost:8989 # will get response from w1
curl localhost:8989 # will get response from w2
curl localhost:8989 # will get response from w1
...
Edit 3: Found out an interesting issue.
Every time I access my website, Chrome makes two calls no matter what: one to / of my website and one to /favicon.ico of my website.
I don't have a /favicon.ico.
What I think is happening:
when Nginx gets the request for / of my website, it serves it from the first upstream server.
when Chrome loads / of my website it also requests /favicon.ico, which results in a second call to Nginx, so the .ico request goes to the next upstream server.
as a result, pages are served by servers 1, 3, 2 in turn (with the favicon coming from 2, 1, 3), and the cycle repeats.
once I stopped passing /favicon.ico to the upstreams in Nginx, my three upstream servers 1, 2, 3 are hit in round-robin order 1, 2, 3.
I put this in the server block that uses the upstream, to stop favicon.ico requests from being proxied:
location = /favicon.ico {
    log_not_found off;
}
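A variant of the same idea, if no favicon is needed at all, is to have Nginx answer the request directly so it never reaches an upstream (an alternative sketch, not the configuration used above):
location = /favicon.ico {
    access_log off;
    log_not_found off;
    return 204;
}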
Hope anyone having this problem finds this useful.
Edit 2: Figured out the issue: the load balancing works fine with static files and static servers defined in the Nginx conf file,
but my applications are served by Node, so I had to start Nginx after starting all the Node servers.
The issue reappears when I restart an application server while Nginx is running.
No issue for now; I will update soon.
Edit 1: This is not working for me anymore. It worked yesterday, but today, continuing with the same configuration, the issue reappeared.
Had this same issue with my setup.
What worked for me, after a lot of proxy setup, VirtualBox setup and network editing, was the following.
Add an extra, empty server block in the http block:
server {
}
and reload the Nginx service.
It worked for me: after reloading, both Chrome and Firefox load the servers in the given order. I then deleted the empty server block and it is still working.
I don't know why the issue arose in the first place.
Hope this helps to solve your issue.

lighttpd 1.4.X - Error with reverse proxy - returns 0 byte - config or program error?

I have a setup where I need a proxy in front of a server.
Lighttpd 1.4.13 is already used on the embedded platform which should act as the proxy.
Newer lighttpd versions are not easily built due to an old toolchain.
One port (e.g. port 84) of the proxy platform should forward all traffic to port 80 on the server.
Some simple pages are forwarded just fine, but some others fail. The server has a "web_resp.exe"; this is offered as a download of 0 bytes.
Wireshark dumping
Dumps with Wireshark show that the needed pages are sent to the proxy platform, but 0 bytes are forwarded. (This was performed on a similar setup.)
Question
Is my configuration wrong?
Is it impossible with lighttpd 1.4.13? (I have seen forum posts saying that lighttpd's mod_proxy has problems in general.)
Reproducibility
I have reproduced the flaw by running lighttpd on a fresh Linux Mint install (same error type).
I get the same error when forwarding to another IP/site (the web configuration of an Ethernet-to-RS232 unit).
Exactly what triggers the error I do not know; maybe just pages that are too large.
Configuration
#lighttpd configuration file
server.modules = (
  "mod_proxy"
)
## a static document-root, for virtual-hosting take look at the
## server.virtual-* options
server.document-root = "/tmp/"
## where to send error-messages to
server.errorlog = "/tmp/lighttpd.error.log"
## bind to port (default: 80)
server.port = 84
#### proxy module
## read proxy.txt for more info
proxy.debug = 1
proxy.server = ( "" =>
  (
    ( "host" => "10.0.0.175", "port" => 80 )
  )
)
Debug dumps
Functional and non-functional requests seem similar.
However, the non-functional ones return a larger amount of data (still fairly small, < 100 kB).
other tests
lighttpd 1.4.35 compiled for the target, but it seems to fail in the same way.
lighttpd 1.4.35 does not work on Linux Mint either.
1.4.35 + the rewrite trick works worse than using a port directly.
lighttpd 1.5 works out of the box (after installing gthread2) on Linux Mint. However, it will not work for the target hardware.
The issue has been found to be faulty HTTP headers provided by the backend.
The issue was submitted to the lighttpd bug tracker: https://redmine.lighttpd.net/issues/2594#change-8877
lighttpd now has support for backends that send only LF instead of CRLF as the header line ending.
You may argue that the bug is in the target web page; however, in my case I was unable to modify the target site.
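If you want to confirm that a backend really sends bare-LF header lines, one quick check (assuming netcat and xxd are available; 10.0.0.175 is the backend host from the configuration above) is to look at the raw bytes of a response:
printf 'GET / HTTP/1.0\r\n\r\n' | nc 10.0.0.175 80 | xxd | head
# header lines ending in 0d 0a use CRLF; lines ending in a bare 0a are the problematic case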

Request Entity Too Large

I get this message,
Request Entity Too Large
The requested resource
/index.php
does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
I set
php_value post_max_size 50M
php_value upload_max_filesize 50M
in .htaccess, but it didn't help.
How can I overcome this?
Thanks
Once you have already raised PHP's memory_limit, post_max_size and upload_max_filesize, I would like to recommend some articles related to the topic; maybe one of them solves the problem.
I found this post on Server Fault:
https://serverfault.com/questions/79741/php-apache-post-limit/79745#79745
sybreon suggests to double-check the Content-Length, and - citing - "ensure that you are directly connecting to Apache and not through either a proxy or a reverse-proxy. Some reverse-proxies place a cap on the maximum size of a request as a sort of security measure. So, you may want to check that as well as your Apache logs to ensure that nothing else is going on."
sybreon also posted this link: Apache 413 error problems.
The following is only applicable if you have the mod_ssl module turned on in Apache. (Otherwise this setting can cause a server crash.)
Citing the article:
"I was using Apache SSL client certificates, which have a limit of 128K, and if re-negotiation has to happen, a larger POST will fail.
This Bugzilla posting had the clues - You have to set the following as DEFAULTS for your SSL server, not just the directory.
SSLVerifyClient require
Otherwise it forces a renegotiation of some sort, and fails with a 413 error."
The previous article also mentioned the LimitRequestBody directive.
A guy says here that the appropriate setting of this directive solved his problem.
I hope one of these settings solves this problem!
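Putting those quoted suggestions together, a rough sketch of what "set as defaults for your SSL server, not just the directory" could look like (the paths, the SSLVerifyDepth value and the 50 MB limit are placeholders, not taken from the posts above):
<VirtualHost *:443>
    SSLEngine on
    # require the client certificate for the whole vhost, so no mid-request
    # SSL renegotiation is needed when a protected directory is reached
    SSLVerifyClient require
    SSLVerifyDepth 1
    # raise the request body cap if it has been lowered elsewhere
    LimitRequestBody 52428800
</VirtualHost>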
The only thing that worked for me was to increase the SSL renegotiation buffer size. You can set this like so:
<Directory /my/blah/blah>
    ...
    # Set this to something big...
    SSLRenegBufferSize 10486000
    ...
</Directory>
...and then just restart Apache for the change to take effect. (Found this at: http://forum.joomla.org/viewtopic.php?p=2085574)
You can also use "Location /" to simply apply the setting to a whole VirtualHost:
<VirtualHost *:443>
    # ...
    <Location />
        SSLRenegBufferSize 101048600
    </Location>
    # ...
</VirtualHost>
My server is Apache. It was the mod_security module that was preventing POSTs of large data (approximately 171 KB).
I made the following changes in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
If post_max_size and upload_max_filesize have been set in PHP,
and LimitRequestBody is set high enough in apache2.conf or the mod_security config files,
then a .htaccess file will possibly work.
1. Go to the directory with the upload PHP file in it (the file or page throwing the error).
2. Make or edit .htaccess.
3. Edit or create a line with LimitRequestBody 20971520 in it.
4. Save the .htaccess and set permissions (644, owned by the Apache user).
5. Possibly restart Apache.
Tada. Hopefully fixed.
This sets the limit for this folder only, which is one way to avoid a global setting in PHP and Apache that leaves you open to large-packet / load DoS attacks.
LimitRequestBody 0 gives you unlimited uploads.
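Putting the steps above together, the resulting .htaccess might look roughly like this (the 20 MB / 50M values are only examples, and the php_value lines only take effect when PHP runs as an Apache module):
# .htaccess in the directory containing the upload script
LimitRequestBody 20971520
php_value post_max_size 50M
php_value upload_max_filesize 50M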
I was struggling with this 413 Request Entity Too Large problem for the last day or so, as I was trying to upload fairly large (multi-MB) images to the server.
My setup is Apache (227) proxying requests to a JBoss EAP (6.4.20) server for accessing REST endpoints.
Two things worked for me.
Making SSLVerifyClient require at the virtual-host level. This means every resource needs a valid client cert presented in order to be served. That was not an option for me, as every resource except /api should NOT be protected by mutual auth. So, while it worked, it was not an option for me.
Instead, I removed the global-level SSLVerifyClient require and kept it at 'optional'. I re-enabled the require option only on <Location /api>...</Location>. The trick was to have the SSL renegotiation happen only after a certain threshold is reached, which would be our desired upload file size.
So, it finally turned out that I had to set 'SSLRenegBufferSize' on a specific LocationMatch, as follows:
<LocationMatch ^/api/v1/path/(.*)/to/(.*)/resource/endpoint$>
    # allow up to 5 MB for files to come through
    SSLRenegBufferSize 5242880
</LocationMatch>
The (.*) parts in the pattern above represent my path params in the endpoint. Hope this helps.
After raising PHP's memory_limit, post_max_size and upload_max_filesize in php.ini, I still had the problem.
What was also needed was the following in apache2.conf:
LimitRequestBody 1000000000
That's for a max size of 1GB.
The docs say that 0 is the default, which means unlimited. However, until I set the directive, I couldn't upload large files.
Don't forget to restart apache2.