hg archive to Remote Directory

Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:
hg archive ssh://user@example.com/path/to/archive
However, that does not appear to work. It instead creates a directory called ssh: in the current directory.
I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping it into the destination directory. However, I would like to know if there is a better way.
#!/bin/bash
if [[ $# -ne 1 ]]; then
    echo "Usage: $0 [user@]hostname:remote_dir"
    exit 1
fi
arg=$1
arg=${arg%/} # remove trailing slash
host=${arg%%:*}
remote_dir=${arg##*:}
# zip named to match lowest directory in $remote_dir
zip=${remote_dir##*/}.zip
# root of archive will match zip name
hg archive -t zip "$zip"
# make $remote_dir if it doesn't exist
ssh "$host" mkdir --parents "$remote_dir"
# copy zip over ssh into destination
scp "$zip" "$host:$remote_dir"
# unzip into containing directory (will prompt for overwrite)
ssh "$host" unzip "$remote_dir/$zip" -d "$remote_dir/.."
# clean up zips
ssh "$host" rm "$remote_dir/$zip"
rm "$zip"
Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.

Nope, this is not possible -- we always assume that there is a functioning Mercurial installation on the remote host.
I definitely agree with you that this functionality would be nice, but I think it would have to be made in an extension. Mercurial is not a general SCP/FTP/rsync file-copying program, so don't expect to see this functionality in the core.
This reminds me... perhaps you can build on the FTP extension to make it do what you want. Good luck! :-)

Have you considered simply having a clone on the remote and doing hg push to archive?
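A minimal sketch of that workflow, assuming Mercurial and an SSH login are available on the remote host (which, per the edit above, is not the case here):
hg clone . ssh://user@example.com/path/to/archive         # one-time setup of the remote clone
hg push ssh://user@example.com/path/to/archive            # send new changesets on each 'archive'
ssh user@example.com "cd /path/to/archive && hg update"   # materialize the files in the remote working directory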

Could you use an SSH tunnel to mount a remote directory on your local machine, and then just do standard hg clone and hg push operations 'locally' (as far as hg knows), while they actually write to a filesystem on the remote computer?
It looks like there are several Stack Overflow questions about doing this:
How do I mount a remote Linux folder in Windows through SSH?
Map SSH drive in Windows
How can I mount a remote directory on my computer?

I am often in a similar situation. The way I get around it is with sshfs.
sshfs me@somewhere-else:path/to/repo local/path/to/somewhere-else
hg archive local/path/to/somewhere-else
fusermount -u local/path/to/somewhere-else
The only disadvantage is that sshfs is slower than NFS, Samba, or rsync. Generally I don't notice, as I only rarely need to do anything in the remote filesystem.

You could also simply execute hg on the remote host:
ssh user@example.com "cd /path/to/repo; hg archive -r 123 /path/to/archive"
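If the remote host has no Mercurial at all (as in the question), a hedged alternative is to stream the archive over the SSH connection and unpack it on the far side; this only assumes tar is available remotely, and the path is illustrative:
hg archive -t tgz - | ssh user@example.com "mkdir -p /path/to/archive && tar -xzf - -C /path/to/archive --strip-components=1"
Here hg archive writes a gzipped tarball to stdout ('-'), and --strip-components=1 drops the archive's top-level directory so the files land directly in /path/to/archive.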

Related

Permission denied inside /var/www/html when creating a website and its files with the apache2 server

UPDATE: The screenshot is from within Atom, but when I navigate to the directory using the file explorer and right-click, the options to rename or create a new folder are restricted and I cannot click on them.
I just finished setting up the LAMP stack on my fresh Ubuntu 18.04 installation. I have everything working: the default /var/www/html/index.html page from Apache2 is being served on localhost, with no port forwarding or any unique domain name. I just want to run this on my network from my computer for now.
If there is a simple way to create multiple websites and easily choose which folder to serve, then that's fine, but I want to serve just one website for now.
When I go to my /var/www/html folder and try to edit the index.html file, it says permission denied. What do I need to do in order to work inside this directory for the remaining time that I am building the website? I am signed in as the root user on my system.
Also, if I do change permissions to allow me to work in this directory, what does it mean for people trying to access my server if it were available to the public? (Right now it is just on localhost.)
Let me know if you need more info or explanation. Thanks!
sudo chown -R $USER:$USER /var/www
This works; it changes the owner to my user instead of the root user. I still don't understand why it was needed, because my user already had sudo rights and all those permissions. It was the user I created during the Ubuntu 18.04 setup, so there shouldn't have been an issue.
File permission issues can be fixed at the command line by typing:
sudo chmod 777 /var/www/html -R
One caveat from turnkeyLinux.com:
Changing file permissions is a trade-off (often increasing security reduces user-friendliness and/or usability). For security 'best practice', only the folders that require write access by the webserver should be owned by the webserver. (If your webserver has write access everywhere and your server is compromised, it is easier to hack your WordPress install.) But for ease of use, giving the webserver ownership should resolve all your issues...
This article on Understanding File Permissions was great, too.
This will help you.
sudo chgrp -R www-data /var/www/html
sudo gpasswd -a username www-data
sudo chmod -R 777 /var/www/html
The permission error occurs because the rights on the folder are held by a different user (you can inspect this with ls -l folderName).
Your problem can be handled in different ways; here are a few:
WAY1:
Find out who is running Apache by running the command apachectl -S.
Locate the user name (say www-data).
Change the ownership of your folder with chown -R www-data:www-data /var/www/html (this will allow only your Apache to play with the files).
Run the command ln -s /var/www/html /home/username/html (this will create a soft link to your folder, through which you can edit/delete/read the files, and the changes will be reflected in Apache).
WAY2:
Go to /var/www/.
sudo chown -R www-data:${USER} html
(Now both Apache and your logged-in user have the rights to play with the files.)
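A sketch combining the two ideas, assuming Apache runs as www-data and your login user should keep write access (on Ubuntu every user has a matching group of the same name):
sudo chown -R www-data:${USER} /var/www/html            # Apache owns the files, your user's group co-owns them
sudo chmod -R g+rw /var/www/html                        # group members (you) may read and write
sudo find /var/www/html -type d -exec chmod g+s {} \;   # new files inherit the directory's group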
If you are not root or do not have permission on something (folders, files, ...), your actions are limited.
Take a folder as an example:
First of all, verify the permissions of your folder:
ls -ld linkto/folder
Then give it the permissions it needs, or type the command below to add all permissions:
sudo chmod -R 777 /var/www/html
Verify the permissions of your folder again; if they are correct, try to copy again.
Give read and write rights to the folder (not the user). Try one of the following commands.
For read and write:
sudo chmod -R a+rw /var/www
For read, write, and execute:
sudo chmod -R a+rwx /var/www
Edit the file as root. Or better yet fix your permissions so you don’t have to worry.
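For a one-off edit as root, a minimal example:
sudoedit /var/www/html/index.html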

I am using Ubuntu, XAMPP, MySQL, and Geany. Trouble using fopen();

So when I try to use:
fopen("sometext.txt", "w") or die("blahblahbla");
I keep getting the following message: "failed to open stream: Permission denied". I have looked for other answers on this site and none of them actually work.
Why is it doing this? Can somebody recommend a fix?
Do I have permission to create files in my directory? I get a bunch of advice on using chmod or changing the "file access", but how do you do this? They never explain that, just "oh use this or that".
If you have terminal access, just fire a command in the file's folder:
sudo chmod 777 sometext.txt (for security reasons, later use the correct chmod permissions)
If you don't have terminal access, you can modify the file attributes in your FTP client (tick all fields -- execute, read, write -- for Owner, Group, and Everyone).
I hope it will solve your problem.
First, make sure you are in the apache group (check it with id username). Then add your user to the apache group (sudo usermod -G apache -a username) and make sure the directory is in the apache group (check it with ls -l directory). I suppose the directory is /var/www/html or /srv/whatever, but XAMPP has its own. If it is not in the group, do a sudo chgrp apache directory. Also, the directory must be writable by group members (chmod g+w directory).
Obviously, the apache user and group must be set in the Apache configuration. If they don't exist, create them (sudo groupadd apache and sudo useradd apache).
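Collected as a runnable sequence (assuming, as above, that the group is apache and the directory is /var/www/html):
sudo groupadd apache                 # only if the group does not exist yet
sudo usermod -G apache -a username   # add your account to the group
sudo chgrp -R apache /var/www/html   # hand the directory over to the group
sudo chmod -R g+w /var/www/html      # let group members write
Note that you may need to log out and back in before the new group membership takes effect.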
P.S.: chmod 777 is evil! It's better to be in the apache group and avoid letting your file be edited by anyone else!

OwnCloud: How to synchronize the file system with the DB

I have to "insert" a lot of files into an ownCloud server (8.2).
A user gave me a USB key with the files and told me to copy all of them into his ownCloud data files repository.
Do you know if it is possible?
Is it possible to synchronize the ownCloud data file system with the ownCloud database?
My environment is Linux CentOS 7 (Apache 2.4, MySQL 5.6, PHP 5.6).
Thanks,
ownCloud ships with a command-line utility that allows you to trigger some tasks manually. Among those is the files:scan function, which re-scans a user's file system.
So you can import those files by following these steps:
1. Copy the files into the physical file system of the user(s) inside ownCloud's data folder.
2. Fire the command-line utility to re-scan the files. That takes care of updating the database according to the files found.
This is an example of the manual trigger:
sudo -u www-data php occ files:scan <user name>
Here <user name> obviously has to be replaced. Also, the account name the sudo command switches to depends on the Linux distribution and its setup. The command has to be started inside ownCloud's base folder. The command can be called in a loop with different user names; that can be done by means of standard scripting, as sketched below.
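For instance, a minimal loop (the account names are hypothetical):
for user in alice bob carol; do
    sudo -u www-data php occ files:scan "$user"
done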
Here is a documentation of the utility: https://doc.owncloud.org/server/8.0/admin_manual/configuration_server/occ_command.html
I just tried it myself on an ownCloud 8.2 installation and succeeded.
Before I could successfully scan my files again as arkascha explained, I needed to change the owner and the group of the new folder to www-data (for Debian; for other systems, see the OC docs below) and set the rights of the new directory to 755.
Change owner:
sudo chown -R www-data:www-data <path>
Change rights:
sudo chmod 755 <path>
where <path> is the path to the newly added directory; it could, for example, look like this: /media/hdd/owncloud/data/<username>/files/<newFolderName>
OC-Docu:
https://doc.owncloud.org/server/9.0/admin_manual/configuration_server/occ_command.html

Move local Mercurial repository to Bitbucket

Has anyone taken a local repo and imported it into Bitbucket? When I go to do this, the Import page asks for a URL, but I'm working on a local computer that does not have port 8000 open to the outside world.
Can I just use some special form of a file path?
First you need to create a repository on Bitbucket: go to Repositories -> Create repository. Then you can choose between HTTPS and SSH.
You can customize your hgrc file like this:
[ui]
username = Your Name <youremail@example.com>
[paths]
myproject = https://.. # The one provided by Bitbucket
Now you can just push your changes to the repository:
$ hg commit -m "my changes"
$ hg push myproject
Or pull changes:
$ hg pull -u myproject
The -u option will also update your local repository after pulling the changes. You can use this option instead of pulling and then updating your local repository. The -u option is the same as doing:
$ hg pull myproject
$ hg update
You may also want to take a look at the hgignore file.
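For example, a minimal .hgignore in the repository root could look like this (the patterns are only illustrative):
syntax: glob
*.pyc
*.log
build/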

mercurial-server: ssh asks for a password

I'm trying to manage my mercurial repos on my server (Debian Lenny) with mercurial-server from LShift. I was using this tutorial: http://kurtgrandis.com/blog/2010/03/20/gitosis-for-mercurial/
But when I try to clone the hgadmin repo, ssh asks me for a password.
hg clone ssh://hg@MyMercurialServer/hgadmin
But I never set a password for the hg user; it was created during the apt-get installation.
Normally, authentication should be done with my public SSH key (which was copied to mercurial-server's keys/root directory). But it seems that mercurial-server doesn't use my public key.
After copying the public key to mercurial-server's keys/root dir, I also refreshed the privileges with
sudo -u hg /usr/share/mercurial-server/refresh-auth
Furthermore, I can't find any log files for mercurial-server.
Does anybody know, how to fix that?
Thanks.
zerkms is correct -- debug the SSH connection directly first. Try something like:
ssh -v -v hg@MyMercurialServer
That'll let you know if your key is being sent and rejected or not sent. Also try adding -i path/to/private/key on the client to force sending the key.
The usual config problem in SSH key setups is the permissions on the authorized_keys file on the server side. It needs to be 0600, and the directory it's in needs to be 0700. You can debug that stuff in /var/log/messages on the server side, where sshd will print a message if it's unwilling to trust the authorized_keys file due to permissions.
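A sketch of the usual fix, assuming the hg account's home directory is where sshd reads the keys from:
sudo chmod 700 ~hg/.ssh                   # the directory must be 0700
sudo chmod 600 ~hg/.ssh/authorized_keys   # the key file must be 0600
sudo chown -R hg:hg ~hg/.ssh              # and owned by the hg user itself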