I am attempting to run a Windows batch script nightly to pull a fresh copy of data from a Mercurial repository to my local hard drive, overwriting any data I have locally. The server hosting the repository holds many repos, so the one I need lives in a sub-directory on the server. I have set up PuTTY to use an RSA key, so when I log onto the server with PuTTY I need only enter my username.
The batch script has a command:
hg pull ssh://myusername@mydomain.com/targetrepo/
...but this only opens a prompt for me to enter my password. Normally this would be fine, but because the pull will be executed from a batch script, I need the RSA key authentication to work.
How do I allow a batch script to pull from a Mercurial repository in a subdirectory on the server without requiring entry of a password?
You said it yourself -- you need the RSA key authentication to work, so you'll need to debug why it isn't. The easiest way is to check the sshd logs on the server side. It'll probably be one of the following:
Your key isn't on the server
The ~/.ssh directory or its contents' permissions on the server are wrong
The SSH daemon on the server doesn't allow passwordless access
It's not actually asking for a password at all; it's asking for a passphrase for your key
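If the key, permissions, and daemon all check out, the fix is usually on the Windows side: tell Mercurial to run PuTTY's plink with your key explicitly, so nothing prompts interactively. A minimal sketch for the batch script, assuming plink.exe is on the PATH and the key lives at a hypothetical C:\keys\mykey.ppk:
hg pull --ssh "plink -batch -i C:\keys\mykey.ppk" ssh://myusername@mydomain.com/targetrepo/
hg update --clean
The -batch flag makes plink fail rather than prompt, which is what you want in a nightly job, and hg update --clean overwrites local data with the fresh copy as the question intends. If it's the permissions case instead, chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys on the server usually fixes it.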
I have created a JSON template to create the Amazon AWS LAMP stack with RDS (free tier) and successfully created the stack. But when I tried to move the files to the /var/www/html folder, it seems the ec2-user has no permission for it. I know how to change permissions with the help of SSH, but my intention is to create a template that sets up a stack (hosting environment) without using any SSH client.
I also know how to add a file or copy a zipped source to /var/www/html with CloudFormation JSON templating. What I need to do is just create the environment and later upload the files using an FTP client and the DB using Workbench or something similar. Please help me attain my goal, which I will share publicly for AWS beginners who are not familiar with setting things up over SSH.
The JSON template is a bit lengthy, so here is the link to the code: http://pasted.co/803836f5
Use the CloudFormation init metadata (AWS::CloudFormation::Init) instead of UserData.
That way you can run commands on the server, such as pulling files down from S3 and then running gzip to expand them.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
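As a rough sketch of what such commands could look like once AWS::CloudFormation::Init runs them (the bucket and file names here are made up for illustration, and the AWS CLI is assumed to be installed on the instance):
aws s3 cp s3://my-bucket/site.tar.gz /tmp/site.tar.gz   # pull the archive from S3
tar -xzf /tmp/site.tar.gz -C /var/www/html              # expand it into the web root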
tar files and distribution-dependent packages like .deb or .rpm include the file permissions for directories, so you could set up a tar or custom .rpm file that records ec2-user as the owner.
Alternatively, whatever scripting element installs Apache could also run a set of commands to set the owner of /var/www/html to ec2-user.
Of course, you might run into trouble with the user/group that Apache runs under: you could end up able to upload with FTP but unable to read with Apache. It needs some thought, and possibly adding ec2-user to the apache group, FTPing as the apache user, or some other combination that gives the httpd server read access and the SSH user write access.
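A minimal sketch of one workable combination, assuming Apache runs as the apache user and group as it does on Amazon Linux:
usermod -a -G apache ec2-user            # let ec2-user share the apache group
chown -R ec2-user:apache /var/www/html   # ssh/ftp user owns, httpd group reads
chmod 2775 /var/www/html                 # setgid keeps new files in the apache group
After this, ec2-user can upload over FTP or SFTP and httpd can still read everything.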
I'm trying to use a CentOS VPS as a place to host my Mercurial repositories; I'll init these repositories on the server and then clone them to my local computer using TortoiseHG.
I've set up a clean VPS with CentOS 6 and taken the usual security steps, such as disabling root login, changing ports, allowing SSH key access only, and adding a new user, user123.
I connect to the server using Pageant and SSH keys generated with PuTTYgen.
I've set up a virtual host, accessible only to my local machine's IPs, located at /var/www/vhosts/hg/; all of my repositories are then accessible at http://123.123.123.123/repositoryname/
I've used yum install mercurial and can create repositories using hg init.
I can then successfully clone my repositories to a local machine using the above URL.
The problem
So far so good; however, the issues arise when I try to push. At the moment I've not set up any sort of connection; I'm simply assuming that since I've been able to clone, I should be able to push (moronic).
However, when I do try to push, I get the following error from TortoiseHG:
abort: destination does not support push - command returned code 255
How do I go about adding push support to the above server configuration? Should I try to get it to use Pageant, or do I need additional server software to support pushing?
I'm not really sure of what the next step is and Googling hasn't yielded any success.
all of my repositories are then accessible using http://123.123.123.123/repositoryname/
Bad configuration... and probably a totally wrong idea to use HTTP when you have SSH.
Which HTTP frontend do you use?
Did you integrate (any) frontend with Mercurial?
Have you enabled push?
In the case of ssh://-served repositories, your task may be a lot simpler.
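For example, over SSH no web frontend is needed at all: you push straight to the repository path on the server (note the double slash for an absolute path). Over plain HTTP, push must be enabled per repository in its .hg/hgrc. A sketch using the paths from the question:
hg push ssh://user123@123.123.123.123//var/www/vhosts/hg/repositoryname
# or, to allow push over plain http, add to the repo's .hg/hgrc:
# [web]
# allow_push = *
# push_ssl = false    # only acceptable because the vhost is IP-restricted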
Here's what I'm trying to do: set up a backup server on Google Compute Engine where employees at my company can have their computers back up nightly via rdiff-backup. A cron job will run rdiff-backup, which uses SSH to send files just like SCP.
With a "normal" server, I can create each employee a new user, and set permissions so they cannot read another employee's files.
It seems like using the "gcloud compute ssh" tool, or configuring regular ssh using "gcloud compute config-ssh", only allows you to allow users to connect who are added to the project and have connected their computer to their google account. My issue with this is that I don't see a way for a user to have read-write abilities on a server without also being a sudoer (anyone added to a project with "Can Edit" can get sudo as far as I know). Obviously if they have sudo, they can read others' files.
Can I give someone the ability to SSH remotely without having sudo? Thank you.
I recommend avoiding gcloud altogether for this. gcloud's SSH tools are geared toward easily administering a constantly changing set of machines in your project; they are not made to cover every use case that would also involve SSH.
Instead, I recommend you set up your backup service as you would a normal server:
assign a static address
(optional) assign a dns name
set up users on the box using adduser
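A minimal sketch of the per-user setup (alice is a placeholder username, and adduser flags vary by distribution):
adduser alice                                      # no sudo group membership
chmod 700 /home/alice                              # colleagues cannot read this home
mkdir -p /home/alice/.ssh
cat alice_key.pub >> /home/alice/.ssh/authorized_keys   # hypothetical public key file
chmod 700 /home/alice/.ssh
chmod 600 /home/alice/.ssh/authorized_keys
chown -R alice:alice /home/alice/.ssh
Because the home directory is mode 700 and the user belongs to no privileged group, each employee's rdiff-backup data stays private.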
You have a couple of options:
1) You can manage non-root users on your instances as you would on any normal Linux machine, by manually adding them with standard commands like 'adduser' rather than via the gsutil/UI/metadata-update path.
2) Alternatively, if you need to manage a large cluster of machines, you can disable the entire ACL management provided by Google and run your own LDAP server for this. The file which is responsible for the account updates, and which needs to be disabled, is this one:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-daemon/etc/init/google-accounts-manager-service.conf
3) Finally, you can lock down write access to the root users, i.e. disable writes propagating from the metadata server, by setting the immutable flag on the sudoers file: 'chattr +i /etc/sudoers'. It's not a graceful solution, but it is effective. This way you lock in root for the already-added users, and any new users will be added without root privileges; any new root-level user then needs to be added manually, machine by machine.
I have an OpenShift account with DokuWiki in one app (PHP 5.3 cartridge). I do backups using rhc snapshot save every day. Today I tried to do a restore with rhc snapshot restore, but it looks like the data is from the last git push I did, and the changes I made inside DokuWiki aren't in the restored "snapshot".
Am I doing something wrong?
The rhc command help says snapshot saves "the state of the application"; doesn't that mean what I expect (saving the whole state of the application)?
Thanks :)
OpenShift offers functionality to back up and restore with the snapshot command within the rhc client tools.
To backup your application code, data, logs and configuration, you run:
rhc snapshot save -a {appName}
To restore your application, you run:
rhc snapshot restore -a {appName} -f {/path/to/snapshot/appName.tar.gz}
When you do an rhc snapshot save, it saves what is in your git repository, what is in your app-root/data, and what is in any databases that you have running. So if you have SSHed or SFTPed into your application and made changes elsewhere, or used a web editor to make physical file changes (ones not stored in a database), then those changes will not be reflected in the backup/restore procedure.
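A common workaround for DokuWiki specifically is to keep its writable data under $OPENSHIFT_DATA_DIR, which the snapshot does capture, and symlink to it from the repository. A sketch to run once over SSH inside the gear; the php/data path is an assumption about where DokuWiki sits in your checkout:
mv ~/app-root/repo/php/data $OPENSHIFT_DATA_DIR/dokuwiki-data    # move writable data into the snapshotted dir
ln -s $OPENSHIFT_DATA_DIR/dokuwiki-data ~/app-root/repo/php/data # link back so dokuwiki still finds it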
My team has been using Mercurial for a while. We use SSH to connect to a central remote repository. We hadn't had any issues with pushing or pulling over SSH to remote repos... until today!
Everyone else is on the LAN; I work remotely and connect to the LAN over VPN (Cisco). No one else is having problems now, but suddenly, no matter what I try, I get "no suitable response from remote hg!"
I am able to access everything else on the LAN, and I can even SSH (in a terminal) into the remote server holding the remote repositories.
Here is the output using the debug command:
sending hello command
sending between command
no suitable response from remote hg
So, it turns out my RSA key had mysteriously disappeared from the list in Pageant, and once I added it back in, everything works properly. Strange, but resolved.
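If it happens again, a quick way to confirm whether Pageant is actually serving the key, before digging into Mercurial itself (the hostname is a placeholder):
plink -v myusername@hgserver exit   # plink picks keys up from pageant; a passwordless login means the key is loaded
hg --debug pull                     # then rerun mercurial with full handshake detail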