HTTP Error 502: Bad Gateway when pushing to bitbucket - mercurial

I have a Mercurial repository. When I try to push my changes to Bitbucket, I suddenly get the error
HTTP Error 502: Bad Gateway
after a long wait (at "searching for changes..."). Any ideas? This has had me stumped for two days!

Some people report similar issues when trying to push large changesets over the HTTP protocol. Try using SSH instead. You can find instructions for Bitbucket here.
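A sketch of what that switch looks like in practice. The repository's push URL lives in .hg/hgrc; "user" and "repo" below are placeholders for your Bitbucket account and repository, and the demo directory is throwaway so you can see the edit without touching a real clone:

```shell
# Demo of the .hg/hgrc edit that switches the push URL from HTTPS to SSH.
mkdir -p demo-repo/.hg
cat > demo-repo/.hg/hgrc <<'EOF'
[paths]
default = https://bitbucket.org/user/repo
EOF
# Rewrite the scheme: Bitbucket's SSH remotes use the "hg" user.
sed -i 's|https://bitbucket.org/|ssh://hg@bitbucket.org/|' demo-repo/.hg/hgrc
cat demo-repo/.hg/hgrc
```

With that in place, hg push goes over SSH instead of HTTP (you'll also need an SSH key registered with your Bitbucket account).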

Related

OpenShift - started getting 404 suddenly yesterday

Yesterday I started getting a 404 error on my free account on OpenShift. I can log in via SFTP and see the files in the /app-root/runtime/repo directory, but when I navigate to the page I get a 404. Does anyone have any ideas about what this could be?
I ended up deleting the gear, recreating it, and uploading all the files again. Not a huge deal, but kind of a pain in the butt. It is a free service, though, so I'll put down my pitchfork and torch.
Next time you observe this, SSH into the gear and inspect the logs at ~/app-root/logs (or, for some cartridges, ~/CART-NAME/logs).
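A minimal sketch of that inspection, assuming the standard gear layout. The gear address below is a made-up example (copy yours from the OpenShift web console), and inspect_logs is just a helper name for illustration:

```shell
# First, SSH to the gear (address is an example):
#   ssh 1234abcd@myapp-mynamespace.rhcloud.com
# Then dump the tail of every log file in a directory:
inspect_logs() {
    # $1: log directory, e.g. ~/app-root/logs
    tail -n 50 "$1"/*.log
}
# On the gear you would run: inspect_logs ~/app-root/logs
```

The last few dozen lines of the cartridge's error log usually show whether the app crashed on startup or the 404 is coming from the front-end proxy.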

Mercurial - how to force the author to match the LDAP login on push?

My objective: force the author recorded by my Mercurial server to match my LDAP user.
Explanation:
I have my Mercurial server. It uses LDAP authentication and authorization via Apache.
On a client desktop, I can clone/commit/push my repositories (I use my LDAP credentials for that).
Issue:
The client commits on their desktop using whatever username they want (example: mydummyusername).
When the client pushes, they must enter their LDAP login/password (login 'john', for example).
Then, when I look at my server over HTTP, I unfortunately see the author 'mydummyusername'. (Note: I expected to see the author 'john'; 'mydummyusername' does not exist in the LDAP directory.)
Ideas:
writing a hook - not working over Apache HTTP (and my friend Google told me that it only works with SSH URLs :-/)
hgwebdir.cgi: trying to get the LDAP username in Python (os.environ['REMOTE_USER']) and forcing it onto the Mercurial changeset object: not working (and I think a bad idea)
Do you have any ideas?
Thank you :)
This is generally a bad idea. In a DVCS it's very normal for someone to pull changesets from one person and push them to another, or to merge someone else's branch into theirs and then push changesets from both.
If you can't trust the people who have LDAP access to your system not to forge the author on their commits, you have organizational problems, not technical ones.
(Also, your friend is wrong: hooks work just fine over both HTTP and SSH.)
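To illustrate that hooks do run for HTTP pushes served by hgweb, here is a hedged sketch of a server-side check. The hook name, script path, and check_authors helper are all made up for illustration; what is real is that Apache puts the authenticated login in REMOTE_USER and Mercurial exposes the first incoming changeset as HG_NODE:

```shell
# Sketch of a pretxnchangegroup hook that rejects pushes whose changeset
# authors don't match the authenticated LDAP login.
# Hypothetical wiring (in the served repo's .hg/hgrc):
#   [hooks]
#   pretxnchangegroup.checkauthor = /usr/local/bin/check_author.sh
check_authors() {
    # $1: expected login; stdin: one author username per line
    expected="$1"
    while IFS= read -r author; do
        if [ "$author" != "$expected" ]; then
            echo "push rejected: author '$author' is not '$expected'" >&2
            return 1
        fi
    done
}
# In the real hook script you would feed it the incoming authors, e.g.:
#   hg log -r "$HG_NODE:" --template '{author|user}\n' | check_authors "$REMOTE_USER"
```

Note the caveat above still applies: this check will also reject perfectly legitimate pushes of other people's changesets.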

Cannot push changes to a repository over WebDAV

Today I tried to push changes to our shared repository hosted on Apache (2.2.x) running WebDAV (over HTTPS).
The repository in the DAV directory is a clone of my working directory. The NoUpdate option is enabled. Both repositories are initialized.
To move on, I mapped the DAV directory/repository as a network drive and set the repository to push to "y:/".
When I try to push from Workbench, the exception "aborted, ret 255" is thrown.
% hg --repository C:\wamp\www\ommon push y:
pushing to y:
searching for changes
abort: Y:\.hg/store/journal: The system cannot find the file specified
[command returned code 255 Thu Jun 20 12:08:28 2013]
Pushing from the command line throws:
pushing to y:\
searching for changes
abort: y:\.hg/store/journal: The system cannot find the file specified
Exception AttributeError: "'transaction' object has no attribute 'file'" in
<bound method transaction.__del__ of <mercurial.transaction.transaction object>>
I tried to alter the path to the directory, since the mixed path separators look strange to me, but it did not help.
Further information: I'm not using hgweb or any CGI-script-based setup.
EDIT: Multiple Google results on this issue left me with the impression that pushing changes to a repository served over WebDAV is not really possible, and that I have to use hgweb instead.
But why? My understanding is that WebDAV is capable of writing, and since I mapped the directory as a network drive, Mercurial should be able to push changes to the web server just as it does to a local directory.
Can someone confirm this?
Windows WebDAV support can be shaky. It's quite possible that, given Mercurial's fairly advanced file-system operations, the OS does something incorrectly, or does something Apache's mod_dav cannot cope with.
It's also possible that something simpler is wrong, like Apache blocking access to paths starting with a dot.
You may be able to find something in your Apache log, but I would recommend not doing this and using a true Mercurial server instead.
Mercurial's HTTP repositories never speak WebDAV.
You have to use a Mercurial-capable web frontend to communicate with the repository, or mount the WebDAV share as a local drive and access the repository on it as a repository on the local filesystem.
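As a sketch of the "true Mercurial server" route: for a quick test, a served clone only needs a couple of [web] options in its .hg/hgrc (shown here with no authentication at all, so don't expose this beyond a trusted network):

```ini
[web]
; Allow anyone to push (fine for a quick test, not for production).
allow_push = *
; hg serve has no TLS here, so don't require SSL for pushes.
push_ssl = false
```

Then run hg serve -p 8000 in the repository and push to http://server:8000/. For production, hgweb behind Apache with proper authentication is the usual setup.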

Bitbucket 502 on Push

I have a Mercurial repository which, halfway through a 93 MB push to Bitbucket, suddenly stops with a 502 Bad Gateway error.
Is there any way I can get more diagnostic information? This has had me stumped for days!
This isn't a great answer, but switching from HTTP to SSH might solve your problem.
(It did for me.)
See here for instructions:
https://confluence.atlassian.com/bitbucket/set-up-an-ssh-key-728138079.html
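A sketch of the key setup those instructions cover. The key path and comment below are placeholders, and the final ssh test is commented out because it talks to Bitbucket:

```shell
# Generate a key pair non-interactively into a throwaway path.
keyfile="$(mktemp -u)"
ssh-keygen -t rsa -b 4096 -N "" -C "you@example.com" -f "$keyfile" -q
# The public key is what you paste into Bitbucket's "SSH keys" page:
cat "$keyfile.pub"
# After adding it, verify Bitbucket accepts the key (it should greet you
# by username rather than give you a shell):
#   ssh hg@bitbucket.org
```

Once the key is registered, repoint the repository's push URL at ssh://hg@bitbucket.org/... and retry the push.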

Push to a Mercurial repository (bitbucket) produces "Request Entity Too Large"?

What does it mean when you try to push to a Mercurial repository on Bitbucket and it produces the response:
abort: HTTP Error 413: Request Entity Too Large
Consider asking the Bitbucket team about this. In the meantime, you could try using SSH access instead of HTTP.
Looks like Bitbucket has a size limit on HTTP uploads and you are exceeding it. It is probably one large file in your push that is breaking things. Try excluding that file and see what happens.
http://www.checkupdown.com/status/E413.html
http://forums.asp.net/p/1191089/2046229.aspx#2046229
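To spot the likely culprit, a quick sketch (largest_files is a helper name made up for illustration; run it from the repository root, and it skips Mercurial's own .hg metadata):

```shell
# List the ten biggest files under a directory, biggest first.
largest_files() {
    # $1: directory to scan; output lines are "<size-in-KB>	<path>"
    find "$1" -type f -not -path '*/.hg/*' -exec du -k {} + |
        sort -rn | head -n 10
}
# Example: largest_files .
```

Anything unusually large at the top of that list is a good candidate to exclude from the push (or move to a separate repository).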
It means that you've hit an upload limit set by Bitbucket, or something along those lines, as @TheSteve0 has pointed out.
If you are running behind nginx, remember that the default upload size limit is 1 MB. To increase it, add the following to the "http" section of nginx.conf:
http {
    ...
    client_max_body_size 10M;
}
Then restart nginx.