Using sudo with Mercurial and SSH authentication

How do I run
ssh-add key
sudo hg clone hg@bitbucket.org/etc/etc
but use my own ssh keys and not the superuser's?
Hey everyone, when I use sudo with, for example, sudo hg clone hg@bitbucket.org/etc/etc after I have added a key to my user account, it doesn't work. I gather this is because the command is run as the superuser, and that account doesn't have my keys loaded. I remember setting some directive (I'm using Debian) that allowed me to run the command under sudo but still have my ssh keys taken from my normal user account, but I didn't make a note of it at the time. Thanks.
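For what it's worth, the Debian directive being half-remembered here is most likely sudo's env_keep option for the ssh-agent socket; a minimal sketch of what that looks like (an assumption, since the post never names it):
# In /etc/sudoers (edit with visudo): keep the invoking user's ssh-agent
# socket visible to commands run under sudo, so keys added with ssh-add still work.
Defaults env_keep += "SSH_AUTH_SOCK"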

The answer by Ry4an pointed me in the right direction, but I found that, with the current version of Hg (1.6.4) at least, you need to put --ssh (or the equivalent -e) after the subcommand rather than before it. For example:
hg clone -e 'ssh -i /path/to/key' ssh://user@host/path

I see you found a way to have the sudo user chain off to your main user's key, but rather than using ssh-agent for something like that, it's more secure to specify the key explicitly:
sudo hg --ssh '/usr/bin/ssh -i /path/to/private.key' clone hg@bitbucket.org/etc/etc
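If typing the key path every time gets tedious, one possible convenience (a sketch, not from the original answers; the key path is a placeholder) is to set it once in the ~/.hgrc of whichever user actually ends up running hg, which is root when the command is run under sudo:
[ui]
# Hg will use this command for all ssh:// URLs instead of plain 'ssh'
ssh = ssh -i /path/to/private.key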

Related

Permission denied inside /var/www/html when creating a website and its files with the Apache2 server

UPDATE: The screenshot is from within Atom, but when I navigate to the directory using the file explorer and right-click, the options to rename or create a new folder are restricted and I cannot click on them.
I just finished setting up the LAMP stack on my fresh Ubuntu 18.04 installation. I have everything working: the default /var/www/html/index.html page from Apache2 is being served on localhost, with no port forwarding or any custom domain name. I just want to run this on my network from my computer for now.
If there is a simple way to create multiple websites and easily choose which folder to serve, then that's fine, but I want to serve just one website for now.
When I go to my /var/www/html folder and try to edit the index.html file, it says permission denied. What do I need to do in order to work inside this directory while I am building the website? I am signed in as the root user on my system.
Also, if I do change permissions to allow me to work in this directory, what does that mean for people trying to access my server if it were available to the public? (Right now it is just on localhost.)
Let me know if you need more info or explanation. Thanks!
sudo chown -R $USER:$USER /var/www
This works; it changes the owner to my user instead of the root user. I still don't fully understand why it was needed, because my user already had sudo rights and all those permissions. It was the user I created during the Ubuntu 18.04 setup, so there shouldn't have been an issue.
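As a hedged middle ground (not part of the original answer), you can keep your user as the owner while giving Apache's group read access, instead of handing the whole tree to one side or the other; the group name and path below are the usual Debian/Ubuntu defaults:
sudo chown -R $USER:www-data /var/www/html
# directories need the execute bit to be entered, regular files generally don't
sudo find /var/www/html -type d -exec chmod 755 {} \;
sudo find /var/www/html -type f -exec chmod 644 {} \;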
File permission issues can be fixed at the command line by typing:
sudo chmod 777 /var/www/html -R
One caveat from turnkeyLinux.com:
Changing file permissions is a trade-off (often increasing security reduces user-friendliness and/or usability). For security best practice, only the folders that require write access by the webserver should be owned by the webserver. If your webserver has write access everywhere and your server is compromised, it makes it easier to hack your WordPress install; but for ease of use, giving the webserver ownership should resolve all your issues...
This article on Understanding File Permissions was great, too.
This will help you.
sudo chgrp -R www-data /var/www/html
sudo gpasswd -a username www-data
sudo chmod -R 777 /var/www/html
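Since the commands above already add your user to the www-data group, a less permissive variant is usually enough (a sketch, not part of the original answer); note that the new group membership only takes effect after logging out and back in:
sudo chgrp -R www-data /var/www/html
sudo gpasswd -a username www-data
sudo chmod -R 775 /var/www/html
newgrp www-data   # or log out and back in so the new group is active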
The permission error occurs because your user does not have rights on the folder; the rights are held by a different user. (You can inspect this by running ls -l folderName.)
The problem can be handled in different ways; the following are a few:
Way 1:
Find out who is running Apache by running the command apachectl -S
Locate the user name (say www-data)
Change the ownership of your folder with chown -R www-data:www-data /var/www/html (this will allow only your Apache to play with the files)
Run the command ln -s /var/www/html /home/username/html (this will create a soft link to your folder, where you can edit/delete/read files and the changes will be reflected in Apache)
Way 2:
Go to /var/www/
sudo chown -R www-data:${USER} html
(Now both Apache and your logged-in user will have rights to play with the files.)
If you are not root or you don't have permission on something (a folder, a file, ...), your actions on it are limited.
Take a folder as an example. First, verify the permissions of your folder:
ls -ld linkto/folder
Then give it the permissions it needs, or type the command below to add all permissions:
sudo chmod -R 777 /var/www/html
Verify the permissions of your folder again; if they are correct, try to copy again.
Just give read and write rights to the folder (not the user). Try the following commands.
For read and write:
sudo chmod -R a+rw /var/www
For read, write and execute:
sudo chmod -R a+rwx /var/www
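A hedged refinement of the recursive chmod above: directories need the execute bit to be entered at all, while regular files usually should not get it, and chmod's capital X grants execute only to directories and to files that are already executable:
sudo chmod -R a+rwX /var/www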
Edit the file as root. Or better yet fix your permissions so you don’t have to worry.

I am using Ubuntu, XAMPP, MySQL, and Geany. Trouble using fopen();

So when I try to use:
fopen("sometext.txt", "w") or die("blahblahbla");
I keep on getting the following message:
failed to open stream: Permission denied". I have looked for other
answers on this site and none of them actually work.
Why is this doing this? Can somebody recommend a fix?
Do I have permission to create files in my directory? I get a bunch of advice on using chmod or changing the "file access", but how do you do this? They never explain that, just "oh use this or that".
If you have terminal access, just run a command in the file's folder:
sudo chmod 777 sometext.txt (for security reasons, switch to a more restrictive chmod later)
If you don't, you can modify the file attributes in your FTP client (tick all fields, read/write/execute, for Owner, Group and Everyone).
I hope it will solve your problem.
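A way to narrow the problem down first (a sketch, not from the original answer; the htdocs path is the usual XAMPP-on-Linux default and 'yourproject' is a placeholder) is to check which user the web server runs as and whether that user can write to the script's directory:
ps aux | grep -E 'apache2|httpd' | grep -v grep
ls -ld /opt/lampp/htdocs/yourproject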
First, make sure you are in the apache group (check it with id username); then add your user to the apache group (sudo usermod -G apache -a username) and make sure the directory belongs to the apache group (check it with ls -l directory). I suppose the directory is /var/www/html or /srv/whatever, but XAMPP has its own. If it is not in that group, do a sudo chgrp apache directory. Also, the directory must be writable by group members (chmod g+w directory).
Obviously, the user and group in the Apache configuration must be the apache user and group. If they don't exist, create them (sudo groupadd apache and sudo useradd apache).
P.S.: chmod 777 is evil! It's better to be in the apache group and avoid letting your files be edited by everyone else!
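Collected as commands, the steps above would look roughly like this (a sketch; the 'apache' user/group name and the /var/www/html path are carried over from the answer as assumptions, and Debian/Ubuntu systems typically use www-data instead):
sudo groupadd apache                # only if the group doesn't exist yet
sudo usermod -a -G apache username
sudo chgrp apache /var/www/html
sudo chmod g+w /var/www/html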

Solaris 10 sudo configuration Issue

I am using SunOS 5.10 Generic_147441-24 i86pc i386 i86pc
If I run
which sudo
I get the following:
/opt/sfw/bin
When I run "sudo -l" I get the following:
User localuser may run the following commands on this host:
(root) NOPASSWD: /sbin/ifconfig
For "visudo":
visudo
-bash: visudo: command not found
Also, the /etc/sudoers file does not exist on the box.
Please help me configure sudo; how is this possible without the sudoers file?
Perhaps you should have a look at Sun (Oracle) RBAC for accounts, rather than rely on sudo in Solaris? It is unclear from your post why you must use sudo, but if you are not calling sudo from a script, it might be worth your while to read: http://docs.oracle.com/cd/E23824_01/html/821-1456/rbac-1.html
I've never seen the sudo binary exist in /opt, so my first thought would be that your visudo binary is not in your path, or the sudo package you installed does not contain the visudo binary. Either way you may consider downloading the sudo package again and reinstalling.
To see if your visudo binary exists anywhere:
find / -name visudo -print
If you find nothing, remember you do not strictly need visudo to use sudo -- it's there as a checkpoint to make sure you do not save and exit a sudoers file that has errors, thus possibly compromising your ability to edit it again or breaking sudo for all users on the host.
Also note that /etc/sudoers can start off empty, just fill it in with your sudo rules. For example, to provide sudo all commands on that host for a user without prompting for a password:
userid ALL=(ALL) NOPASSWD: ALL
That particular user ID can run "sudo -l" to list the sudo rules available to it. You could do this even just to test that sudo is in fact working on your host.
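If a visudo binary does turn up later, it can also be used purely as a syntax check on a hand-written file before relying on it (a sketch; /etc/sudoers is the conventional default path):
visudo -c -f /etc/sudoers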
You can easily get the location of the sudoers file from the sudo binary itself by running:
cat $(which sudo) | strings | grep /sudoers
Then, you would know what file to modify.

mercurial-server: Password is asked for ssh

I'm trying to manage my mercurial repos on my server (Debian Lenny) with mercurial-server from LShift. I was using this tutorial: http://kurtgrandis.com/blog/2010/03/20/gitosis-for-mercurial/
But when I try to clone the hgadmin repo, ssh asks me for a password.
hg clone ssh://hg@MyMercurialServer/hgadmin
But I never set a password for the hg user; it was created by the apt-get installation.
Normally, authentication should be done with my public ssh key (which was copied to mercurial-server's keys/root directory). But it seems that mercurial-server doesn't use my public key.
After copying the public key to the mercurial-server keys/root dir, I also refreshed the auth with
sudo -u hg /usr/share/mercurial-server/refresh-auth
Furthermore, I can't find any log files for mercurial-server.
Does anybody know, how to fix that?
Thanks.
zerkms is correct -- debug the ssh connection directly first. Try something like:
ssh -v -v hg@MyMercurialServer
That'll let you know if your key is being sent and rejected or not sent. Also try adding -i path/to/private/key on the client to force sending the key.
The usual config problem in ssh key setups is the permissions on the authorized_keys file on the ssh server side. It needs to be 0600 and the directory it's in needs to be 0700. You can debug that stuff in /var/log/messages on the server side, where sshd will print a message if it's unwilling to trust the authorized_keys file due to permissions.
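In concrete terms, that check might look like this on the server (a sketch; /var/lib/mercurial-server is the package's usual home directory for the hg user, so the exact paths are an assumption):
sudo ls -ld /var/lib/mercurial-server/.ssh /var/lib/mercurial-server/.ssh/authorized_keys
sudo chmod 700 /var/lib/mercurial-server/.ssh
sudo chmod 600 /var/lib/mercurial-server/.ssh/authorized_keys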

"No such repository hgadmin" while installing mercurial-server.

I'm trying to install mercurial-server. After adding my keys to keys/root and refreshing auth, I tried to clone hgadmin-repo but I get the following error:
$ hg clone ssh://hg@<domain>/hgadmin
remote: mercurial-server: no such repository hgadmin
abort: no suitable response from remote hg!
Anyone know what's the problem?
I had this same problem, and for me it was a problem with the installation of the hgadmin repository. When I installed the package, I got errors from Python saying the mercurial package wasn't installed. I assume that happened when mercurial-server tried to initialize the hgadmin repository. So when I went to check out the hgadmin repository, there was no .hg directory:
root@myshost:/var/lib/mercurial-server/repos# cd hgadmin/
root@myshost:/var/lib/mercurial-server/repos/hgadmin# ls -a
. ..
In order to resolve this, I did:
easy_install mercurial
sudo apt-get purge mercurial-server
sudo rm -rf /var/lib/mercurial-server
sudo apt-get install mercurial-server
And then continued on with the directions here:
http://kurtgrandis.com/blog/2010/03/20/gitosis-for-mercurial/
Thanks a lot, Randy, for exposing the exact issue here.
I struggled with the same problem and found an alternative approach to solving it (without the need to purge and re-install).
You can initialize the hgadmin repo manually and install the hooks, achieving the same effect as a normal installation. You need to do it as the 'hg' user, though.
Procedure
These commands worked for my environment (Ubuntu 10.04.4 / Hg 1.4.3).
First, initialise a Mercurial repository in /var/lib/mercurial-server/repos/hgadmin:
$ sudo su hg
$ cd ~/repos/hgadmin/
$ hg init
Then the only difference I found compared with a normally initialized hgadmin repo (one I deployed in a VM for comparison) was the hooks in the .hg/hgrc file. So open the file:
$ vim .hg/hgrc
and paste this exact content:
# WARNING: when these hooks run they will entirely destroy and rewrite
# ~/.ssh/authorized_keys
[extensions]
hgext.purge =
[hooks]
changegroup.aaaab_update = hg update -C default > /dev/null
changegroup.aaaac_purge = hg purge --all > /dev/null
changegroup.refreshauth = python:mercurialserver.refreshauth.hook
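After saving the file, it may also help to re-run the refresh script quoted in the previous question from your normal sudo-capable account, so the key handling is re-applied; this extra step is a suggestion rather than part of the original answer:
$ sudo -u hg /usr/share/mercurial-server/refresh-auth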
Are you sure your clone command syntax is correct? I see at least two errors in it:
You must specify the repo you're cloning from (not just the destination)
Just as for push, you must use two slashes before hgadmin:
Example FAILING (missing the source repo and using only one '/' before 'home'):
$ hg clone ssh://John@127.0.0.1/home/John/delme
Example FAILING (source repo present, but still only one '/' before 'home'):
$ hg clone . ssh://John@127.0.0.1/home/John/delme
Example SUCCEEDING:
$ hg clone . ssh://John@127.0.0.1//home/John/delme
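For what it's worth, the difference between the two forms is that a single slash is interpreted relative to the remote user's home directory, while the double slash makes the path absolute on the server. A small illustration using the same placeholder host and user:
$ hg clone . ssh://John@127.0.0.1/delme               # relative: ends up under /home/John
$ hg clone . ssh://John@127.0.0.1//home/John/delme    # absolute path on the server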