SourceTree constantly asks to authenticate with Mercurial (Kiln)?

I'm running the latest OS X Lion with SourceTree.
I connected to a FogBugz Kiln repository successfully, but whenever I push or pull, SourceTree consistently asks for a username and password even though I saved them to my Keychain. Does anyone have any insight into this issue?

This helped me solve the issue on a Mac:
Open the Hosted Repositories window by clicking View > Show Hosted Repositories or Command + Shift + H.
Click Edit Accounts
Double-click on your account
Click Set Password

I had the same problem and I fixed it by using the system Git instead of the embedded one:
Settings -> Git -> Use System Git

Open a terminal and type git config --global credential.helper osxkeychain
Allow access when asked. Then do a pull from SourceTree; you may have to enter your password one more time, but after that it won't ask again.
PS: This solution is only for Mac OS.
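To verify the helper is now configured, you can read the setting back (a quick check, assuming a standard Git install):

git config --global credential.helper
# expected output: osxkeychain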

I occasionally run into the same problem. None of the methods listed here actually helped me, but after I restart my computer, I am again able to do as I please with SourceTree and Git.
Nevertheless, this issue is annoying as hell, and seeing that Atlassian hasn't resolved it in the three-plus years since the original question was posted is even more unnerving.

I had the same problem and it troubled me for a long time, but I found a solution:
Go to a terminal in your project folder.
Run git pull
Enter your username and password.
Go back to SourceTree and run Fetch or Pull; it won't ask for your password again.
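A minimal sketch of those terminal steps (the project path is just an example):

cd ~/Projects/your-repo
git pull
# enter username and password when prompted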

I ran into the same problem; here's what I did:
Open Keychain Access
Find the corresponding keychain entry for your repo and double-click to open it (e.g. the entry named github.com)
Click the 'Access Control' tab
Select 'Allow all applications to access this item' and save the change
This solved the problem (at least for me), though in some sense it makes things less secure.
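On the command line you can inspect the same keychain entry with the built-in security tool (github.com being the example entry name from above):

security find-internet-password -s github.com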

As Laurens said in the comments, you can file an issue with us via jira.atlassian.com (project SRCTREE). It shouldn't constantly ask for authentication if you've saved your credentials to the keychain, unless there's an authentication problem.
Cheers

Wasted 90 minutes on all this. SourceTree simply would not let me remove my account and add it back. I finally uninstalled it and downloaded an older version:
https://www.sourcetreeapp.com/download-archives
Version 1.10 fixed all my issues.

Related

SourceTree - Mercurial - Authentication - requesting user name and password each time?

I use SourceTree with a local Mercurial server, and the problem is that SourceTree asks me to authenticate on every operation. For example, a single clone can mean entering the user/password 10 times...
Even though I enter the user/password and check the "remember" checkbox, it keeps asking.
I have seen that I can use SSH, but I have no access to the repository web page (it is a local server) to set up an SSH key. Here is what I have tried:
1 - Setting up an account in SourceTree using Options > Authentication, with the "Bitbucket Server" option and our server URL. With this method I can't even enter my password; it just fails!
2 - Using a URL like this: https://username:password@serverurl
3 - Using the Windows Credential Manager!
4 - Editing the .hgrc file
Does anyone have an idea?
I was unable to solve the issue, so I used TortoiseHg instead, and that tool works!
So it looks like a bug in SourceTree!
You can also switch to SourceTree version 1.6.23, which works.
This solution on the Atlassian Community solved the issue for me (edited for typos and clarity):
For everyone using SourceTree on Windows with Mercurial as the versioning tool who wants to get rid of the annoying popup asking for your credentials:
Start a cmd shell as admin
Change to the directory where git-credential-manager.exe is installed (normally ~\AppData\Local\Atlassian\SourceTree\git_extras)
Run "git-credential-manager.exe store"
On the following lines, fill in:
protocol=https
host=code.domain.name
username=yourLoginName
password=yourLoginPassword
Press Return again to finish with an empty line. If you don't get any message, everything is okay.
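Put together, the whole exchange looks roughly like this (host and credentials are the placeholder values from the steps above; %LOCALAPPDATA% expands to ~\AppData\Local):

cd %LOCALAPPDATA%\Atlassian\SourceTree\git_extras
git-credential-manager.exe store
protocol=https
host=code.domain.name
username=yourLoginName
password=yourLoginPassword

(finish with an empty line)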

Google Chrome Ignoring Hosts File

Google Chrome is ignoring the settings in the C:\Windows\System32\drivers\etc\hosts file. Both IE11 and Firefox are installed on the same machine and work as expected.
I've tried all the solutions I could find online, including:
Open chrome://net-internals/#dns and click the Clear Hosts Cache button.
Go to Settings, Show Advanced Settings, and uncheck the following three options:
(X) Use a web service to help resolve navigation errors
(X) Use a prediction service to help complete searches and URLs typed in the address bar
(X) Use a prediction service to load pages more quickly
Go to Settings, Show Advanced Settings, click the Clear Browsing Data button, select Cached Images And Files from the beginning of time, and click Clear Browsing Data.
Restart Chrome.exe.
Restart the computer.
Make sure to add http:// to the front of the web address.
Make sure proxy settings are turned off
Run cmd.exe and run ipconfig /flushdns
Uninstall and reinstall Chrome
I'm at a loss... Is there anything I missed that I can try or check?
It seems that Chrome doesn't like the following TLDs for that kind of thing:
.dev
.localhost
.test
.example
.app
Use .local and the problem seems to disappear.
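For example, a hosts entry like this (the host name is just an illustration) should work where a .dev or .test name may not:

127.0.0.1 mysite.local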
If anyone stumbles on this problem in 2021: for me the fix was to disable the Use secure DNS option in Chrome's settings. After disabling it, all the entries in the hosts file started working.
The option is located under Privacy and Security > Use secure DNS
Link to get there faster:
chrome://settings/security
This has been identified as a "bug" in Chrome, but it appears to be absolutely intentional behavior. Google Chrome does not honor /etc/hosts when connected to the Internet. It always does a DNS lookup to determine IP addresses.
While my references below mostly relate to my experiences with this on Linux, it is not confined to Linux.
https://groups.google.com/a/chromium.org/forum/#!topic/net-dev/iKXqyc40tW0
https://superuser.com/a/887199/75128
https://bugs.chromium.org/p/chromium/issues/detail?id=117655
Okay, I faced the same problem, but then I found the solution.
Try this:
Go to History (Ctrl+H) -> in the left pane, click Clear browsing data
In the new window that opens, go to the Advanced tab
Set Time Range to All Time -> check Cached Images and Files -> click Clear data
Restart your computer. It should start redirecting the addresses mentioned in the hosts file (C:\Windows\System32\drivers\etc\hosts)
Note: This solution is only for Google Chrome
Try clearing the DNS Cache:
1) run cmd.exe as administrator
2) type: ipconfig /flushdns
I just encountered this tonight and none of these options worked. I discovered that Chrome now hides "www" (https://www.howtogeek.com/435728/chrome-now-hides-www-and-https-in-addresses.-do-you-care/). Chrome was using my hosts file, but I had to add "www." to my hostname in my hosts file since that's what the browser is actually requesting, even if it doesn't show it.
A little late, but after hours I found a solution. It seems that Google Chrome sometimes has problems recognizing the host names defined in /etc/hosts.
I'm using Linux and I'm behind a proxy.
Try adding .localhost to the end of the server name.
Example:
In /etc/hosts:
127.0.0.1 myservername.localhost
In the virtual hosts of your server configuration you'll need to rename the server name. In my case I'm using Apache, so in /etc/apache2/sites-enabled/myserver.conf replace the old server name line with:
...
ServerName myservername.localhost
If you are behind a proxy, you can exempt hosts by adding them to the no_proxy variable:
export no_proxy="localhost"
Finally, don't forget to restart the server and try to access the new server name in the browser.
😊 simple answer 😊
There are 3 workarounds for this:
1- deleting the Visited Links binary file (beauty 👍)
2- using .local or .app instead of your desired TLD (standard & preferred by the Chrome docs, but I don't like it)
3- restarting your computer (ugly 👎)
Deleting the Visited Links binary:
kill all Chrome tasks (close all Chrome windows :))
delete the C:\Users\[USERNAME]\AppData\Local\Google\Chrome\User Data\Default\Visited Links binary
You can define a function in your shell profile to do this quickly with a single command whenever you face the issue, e.g.:
function respectHosts {
    # delete Chrome's Visited Links cache so hosts-file changes take effect
    $path = Join-Path $HOME "AppData\Local\Google\Chrome\User Data\Default\Visited Links"
    Remove-Item $path
}
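Then, after reloading your PowerShell profile, fixing the issue is a single command:

respectHosts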
Important note:
It is suggested that the first time after deleting the Visited Links binary file you also delete your history, because if you open a URL from history you are actually using that URL's cached DNS too.
Running Chrome 105 on Windows 11, nothing seemed to work until I added ::1 (i.e. IPv6) in addition to 127.0.0.1. For example:
127.0.0.1 local.foo.com
::1 local.foo.com
While it was stated that no proxy is being used, I have had the same issue on OS X while using a proxy, and the eventual solution was to add a proxy exception for this domain.
What the OP could try is to turn off async DNS via a command-line switch, as mentioned here in 2015:
Async DNS: Remove toggle from about:flags
Async DNS is fairly stable at the moment, so we don't really need the
toggle in about:flags anymore. (Note that the --enable-async-dns and
--disable-async-dns command-line flags will still work for now.)
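On Windows, that would mean launching Chrome with the flag spelled out in the notice above (assuming your Chrome version still recognizes it):

chrome.exe --disable-async-dns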
This, however, seems to have no effect in my case, as chrome://net-internals/#dns still displays the internal DNS-client as enabled with no obvious way to turn it off.
I had a similar issue working from a Windows-based server that had proxy settings. In the advanced proxy settings there are two options that can help: a checkbox to ignore proxy settings for local hosts, and a list of addresses, separated by semicolons, where you can exempt certain IP destinations. This fixed my issue.
For me,
chrome://net-internals/#sockets
and Flush socket pools worked wonders. Credit: https://superuser.com/a/611712

phpMyAdmin shows a blank page

Recently we upgraded MySQL from 5.5.x to 5.6.x on Ubuntu 12.04, and we also replaced the php5-mysql library with php5-mysqlnd (which is recommended by MySQL).
Since the library change, phpMyAdmin has stopped working and shows a blank page.
I have followed advice from many forums and contributors but have had no success so far.
I also used the Ubuntu repository ppa:nijel, as suggested in another topic on Stack Overflow, which I believe has a modified phpmyadmin package that includes support for php5-mysqlnd, but still no success.
I have also enabled the highest verbosity in php.ini, but still no error or warning is generated in any log; using Chrome developer tools it shows "500 Internal Server Error".
I am clueless now; I hope someone can help me determine what obvious thing I may be missing.
I just ran into a very similar error, and I thought I would leave my solution here in case anyone finds this while searching for my error. The difference was that phpMyAdmin showed a blank page after a successful login.
The solution was removing the "X-Frame-Options: Deny" header.
When setting up the web server, I didn't remember that phpMyAdmin relies on iframes to serve its interface.
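If you set the header server-wide, one way to lift it just for phpMyAdmin might look like this (a sketch assuming Apache with mod_headers and a /phpmyadmin alias):

<Location /phpmyadmin>
    Header always unset X-Frame-Options
</Location>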
Check the MySQL and PHP error log files, located at /var/log/mysql/error.log and /var/log/apache2/error.log respectively.
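For example, watch both logs while reproducing the blank page:

tail -f /var/log/mysql/error.log /var/log/apache2/error.log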
Even after multiple uninstalls, purges, and reinstalls of phpmyadmin I did not have success.
Finally I used a brute-force approach: from another Linux server where phpMyAdmin was running properly, I copied the /usr/share/phpmyadmin, /etc/phpmyadmin, and /var/lib/phpmyadmin folders and overwrote them on the problematic server.
Everything works perfectly now.
Thanks for the help, Vibhas... I just thought I'd post this in case it helps someone.
In case someone (like I was) is using phpMyAdmin via XAMPP on Windows and has Skype turned on: try turning Skype off (or configure XAMPP to use another port).
I was facing a similar issue with phpmyadmin (phpmyadmin was returning a blank page). It was alright once I reinstalled phpmyadmin using:
apt-get install --reinstall phpmyadmin
If you are not root:
sudo apt-get install --reinstall phpmyadmin
When I changed this inside config.inc.php:
$cfg['UploadDir'] = '/tmp';
$cfg['SaveDir'] = '/tmp';
to:
$cfg['UploadDir'] = '';
$cfg['SaveDir'] = '';
and reloaded the page, it worked right away!
This is a long shot, but it happened to me when I realized that after a cPanel password reset I had not checked the "Synchronize MySQL password" option.
This used to be checked by default in previous cPanel versions.
To fix it, all I did was reset my cPanel password again with the "Synchronize MySQL password" checkbox selected, and phpMyAdmin was back.
Hope this helps others; and if it's causing a big issue, cPanel should select this option by default again.
I had a similar experience after installing MySQL 5.5.60 and phpMyAdmin 4.2.12. What did the trick for me was changing the folder ownership of /var/lib/php/session, which was the solution in another case for getting PHP scripts to run in general. In my case, the user www-data is the one executing the related scripts, so I set the folder permissions accordingly:
chown -R www-data:www-data /var/lib/php5/
and the page appeared as expected.
A web search for [phpmyadmin blank page] shows many people are having this problem, and there are almost as many different solutions proposed. So let me add one more, after spending a day on this and finally having success:
When the blank page was displaying, I opened Developer Tools (Command-Option-I on Mac in Chrome or Brave). I immediately observed multiple failures to load .js files from phpmyadmin/js/dist. Checking that directory, I found that it was indeed empty.
I then went to https://www.phpmyadmin.net/ and downloaded the zip file to a different location. When I unzipped it, I found that js/dist did in fact contain many .js files. I copied all of these files to my web server's phpmyadmin/js/dist directory, and the problem was solved! I now have a working page.
I hope that helps some of you.
Perhaps I should add that I installed phpMyAdmin using the Composer install method (% composer create-project phpmyadmin/phpmyadmin). I did it a second time in an offline directory, and again the js/dist directory was empty. I don't know if that means I did something wrong or if that install method is broken.
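For anyone in the same spot, the copy step is roughly this (archive name and web root are hypothetical; adjust them to your install):

unzip phpMyAdmin-x.y.z-all-languages.zip
cp -r phpMyAdmin-x.y.z-all-languages/js/dist/* /var/www/phpmyadmin/js/dist/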

Why does Mercurial return "Abort: Access is Denied" when trying to push a repository?

I'm running into a problem with a user not being able to push his commits to a Mercurial repository, and I'm perplexed as to why it's not working for him. I've tried several things to figure out what's up, and Googling doesn't turn up anything helpful... so here I am.
First, the configuration. We have a Windows XP SP2 x64 machine on our network acting as our official repository server. It contains several repositories. We clone/push/pull using a shared folder on that drive. Everyone is given read access; users who can push (including the user having problems) are given full control. The user's machine is Windows XP based. My machine (used to help troubleshoot things) is also Windows XP based.
Second, the symptoms. The user is using TortoiseHg 2.1.1 to do his work. He can clone just fine, commits to his local repo are a-OK, etc. When he tries to push, however, TortoiseHg returns an "abort, ret 255" code. Not very helpful. So we went to the command line and issued "hg push -v --debug". There it returns "abort: Access is Denied". This same user can write to the server's shared folder with no problem: he can create files and directories and delete them as well. So reading/writing the drive/folder is not an issue.
Third, our experimentation results. Here are some weird results from testing. The user created a new, local test repo. I logged into the server machine and created a test repo for him to push to. The user checked in a file and then pushed it up to the test repo on the server machine. This worked fine. No aborts. Life was good. He was able to do a few more pushes and it continued to work as expected. I then cloned the repo to my machine, updated a file, and pushed it back out. After the user then pulled in my changes and tried to push back to the server, he once again encountered the dreaded "Access is Denied" message. Meanwhile, I can still update the project without any problems.
As another experiment, we had the user log out and another user log in. They did so and were able to push to the server repo without a problem. Original user logs back in, makes some changes, etc. and once again hits the brick wall of "Access is Denied".
As far as we can tell, the problem is not related to Windows credentials. Otherwise, we'd expect that creating arbitrary files on the server's shared folder would not work. Further, until I made an update to the test repo the user created, he could push to that particular repo just fine.
Any ideas? What additional credential checks is Mercurial making that might cause this?
UPDATE:
After a tip from Wim, I started to look at the permissions on the various objects of the repo using 'cacls', a Windows tool that "displays or modifies access control lists of files". I had the user create a new repo and took a snapshot of the permissions. I then checked a file in to the same repo and took another snapshot of the changes.
It turns out that several repo files have their permissions updated as a result: undo.bookmarks, undo.branch, undo.desc, undo.dirstate, branchheads, 00changelog.i, 00manifest.i, undo, and the single file of the repository. All of these files had permissions similar to the following:
C:\Projects\Mercurial\hgtest4\.hg\store\undo BUILTIN\Administrators:F
                                             NT AUTHORITY\SYSTEM:F
                                             DOMAINxxxx\USERIDxxxx:F
                                             BUILTIN\Users:R
(The actual DOMAINxxxx and USERIDxxxx values have been altered.) Prior to my check-in, DOMAINxxxx\USERIDxxxx reflected the user's domain and user ID. After my check-in, these were updated to mine (we're on the same domain, but the user ID is obviously different). I was able to check things in and out even though my user ID wasn't listed, because I'm a member of the BUILTIN\Administrators group. The user with the problem is not. So I'm guessing that after I checked things in, the system no longer saw him as a credentialed user with write access (BUILTIN\Users:R indicates read-only access) and therefore denied him.
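For reference, a sketch of the commands involved, using the example path and placeholder account from the listing above (/T recurses, /E edits the ACL instead of replacing it, /G grants the listed permission):

cacls C:\Projects\Mercurial\hgtest4\.hg\store\undo
cacls C:\Projects\Mercurial\hgtest4\.hg /T /E /G DOMAINxxxx\USERIDxxxx:F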
I've got a terribly Q&D fix in place right now (the user is now part of the Admin group...). The real fix is going to be getting the repo off of Windows sharing and onto a proper server configuration.
He was able to do a few more pushes and it continued to work as expected. I then cloned the repo to my machine, updated a file, and pushed it back out. After the user then pulled in my changes and tried to push back to the server, he once again encountered the dreaded "Access is Denied" message.
It sounds like your push creates or modifies files in the .hg folder in such a way that they are (or become) inaccessible for the other user.
I'm no expert on NTFS file permissions, but I think you can fix such situations by forcing all the content of the folder to inherit its permissions. Try selecting "Replace all child object permissions with inheritable permissions from this object" in the Advanced Security settings of the folder.
However, sharing the repository files directly via Windows file sharing is not recommended. You need a server process between the users and the repository files for the sake of performance, data integrity, and security. Without such a gatekeeper, granting commit access also means granting the ability to destroy or corrupt the repository files (or, as you found out in this case, to change their permissions).
See Publishing Mercurial Repositories on the Mercurial wiki for more information about other options.
When trying to commit on my local clone of a repo on my network share, I was getting the same error message:
00manifest.i Access is denied
It's probably overly simplistic, but removing some of the read-only permissions from the offending files made my hg commit work fine.
I just had the same issue, abort: Access is denied. The cause was my firewall (Privatefirewall) silently blocking some of hg's actions.
I was getting exactly the same error message when trying to hg push at the Windows command prompt. I'd recently received a new user profile after the old one was corrupted, and I then ran into this "Access Denied" error. In TortoiseHg I received a similar message, "Aborted: Error 255".
I tried the advice given here by Wim Coenen, as it seemed to fit given my new user credentials. Eventually, I tracked the error to a badly installed Windows Git. It was only failing when I used repositories with Git sub-repos.
In case others are having a similar problem with Git sub-repos:
Check that Git is installed correctly. I removed and re-installed it completely. (See https://code.google.com/p/msysgit/downloads/list for the latest version.)
Ensure that the path to Git is in the PATH environment variable (right-click My Computer -> Advanced tab -> Environment Variables). Don't forget that some applications do not like Windows paths with spaces in them, so you might need to replace "Program Files" with "PROGRA~1" (possibly "PROGRA~2" on 64-bit systems).
If you are using a proxy, ensure that the HTTP_PROXY and HTTPS_PROXY environment variables are also set correctly; see the example below.
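For example, from a cmd prompt (the proxy host and port here are hypothetical placeholders):

set HTTP_PROXY=http://proxy.example.com:8080
set HTTPS_PROXY=http://proxy.example.com:8080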

Unable to get email-ext.hpi to work in Hudson

I have just set up Hudson and have begun playing around with it.
I have downloaded email-ext.hpi into the folder $HUDSON_HOME\plugins.
I have restarted Hudson after step 1. (I am following this manual method because, for proxy-setting reasons, I am unable to use the automatic way of installing plugins via the "Manage Hudson" page.)
I don't see any errors when Hudson starts. In fact I see the line
INFO: Started all plugins
BUT:
When I open a project configuration page, I do not see the promised option "Editable Email Notification".
FYI:
1. I am able to set up and run a few basic test builds, and they run fine.
2. I am also able to configure and receive the default Hudson emails for failures and subsequent successes. (This confirms the SMTP settings.)
3. I was also able to set up the Subversion tag .hpi in the same way as detailed above, and that works fine as well!
What am I missing? Thanks in advance for any help!
EXTRA INFO:
Hudson version: 1.379, running on Windows XP
OK, I figured out a workaround (although I still need to dig into why this is a problem). Recording it here for anyone else that may face this issue.
The plugin, when copied into $HUDSON_HOME\plugins, was somehow not really being activated/recognized. But when I also copied it to C:\Documents and Settings\mylogin\.hudson\plugins and restarted the Hudson service, voila! It worked.
If anyone knows why this might have occurred, kindly record it here for reference. Thanks.
To install a plugin you should use the easy route. In Hudson, go to 'Manage Hudson' -> 'Manage Plugins' -> 'Advanced' (it's a tab) and use the 'upload plugin' option.
Then follow the instructions. Usually you have to restart Hudson to actually get the plugin.
That's way safer than messing around with the file system. In general the approach you took should have been correct, but there seems to be an issue with your $HUDSON_HOME. Have a look at the "Manage Hudson" -> "Configure System" page. What is the Hudson home directory displayed at the top of the page? I don't know what Hudson does if it can't access the home directory. My assumption here is that Hudson runs as a service under a user account rather than the local system account, and that you used a different account to copy the .hpi file.
Install the Maven Legacy and Maven 3 plugins.