Unable to mount a directory on Google Compute Engine using sshfs - google-compute-engine

I am trying to mount a remote filesystem on Google Compute Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using the following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting this error:
SSHFS version 2.5
read: Connection reset by peer
I referred this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could log in using ssh via the following commands:
$ gcloud compute config-ssh
$ ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't figure out which key and username I should use for the sshfs login.
Update (11/5):
I am using the following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder to my user and checked that the IP matches the entry in the ~/.ssh/config file. However, I still get the error: read: Connection reset by peer

The problem with the command below is that:
1) unless you have a static IP, it keeps changing on machine reboot
2) you need to use the .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>@<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce

A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default, the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided.
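As a hedged sketch, such an entry could look like the one below (the values in angle brackets are placeholders; gcloud compute config-ssh usually writes a few extra options, and you may need to add the User line yourself):
Host <instance-name>.<region>.<project_id>
    HostName <external-ip>
    User <user_name>
    IdentityFile ~/.ssh/google_compute_engine
With that entry in place, sshfs only needs the host alias and a local mount point owned by your user, and the remote side defaults to the login user's $HOME:
mkdir -p ~/gce-mount
sshfs <instance-name>.<region>.<project_id>: ~/gce-mount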

If you get this from sshfs:
read: Connection reset by peer
it may help to make the key file read-only:
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.

"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges" (directory to this file does not *appear* to exist)

I am working on a server running Ubuntu 18.04. This DigitalOcean tutorial on Django deployment (https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04) is telling me to do the following:
"We’re now finished configuring our Django application. We can back out of our virtual environment by typing:
(env): deactivate" I am familiar with virtual environments, I did this. Now for the part I am not at all familiar with:
"Start by creating and opening a systemd socket file for Gunicorn with sudo privileges:
sudo nano /etc/systemd/system/gunicorn.socket
"
First, since I just deactivated my env, I am now at justin@ubuntu-s-1vcpu-1gb-nyc3-01:~$. If I ls, I only see the project folder I created, which holds the virtualenv, the Python project, manage.py and the static directory. Nowhere can I find this
/etc/systemd/system/
directory and the command they are telling me to use cannot create directories, only files. So I am very confused, any help would be greatly appreciated.
/etc doesn't live inside ~. Try ls /etc to see what's already in that directory. If you need to create that directory, you can do so with sudo mkdir -p /etc/systemd/system/ (the -p flag is to make sure that, in case systemd is also not present under /etc, it will get created).
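Once you open the file with the sudo nano command, it will be empty; roughly, the socket unit that tutorial has you write looks like the sketch below (defer to the tutorial's exact content, and note that /run/gunicorn.sock is the socket path it uses):
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock

[Install]
WantedBy=sockets.target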

I am using Ubuntu, XAMPP, MySQL, and Geany. Trouble using fopen();

So when I try to use:
fopen("sometext.txt", "w") or die("blahblahbla");
I keep on getting the following message:
"failed to open stream: Permission denied". I have looked for other answers on this site and none of them actually work.
Why is it doing this? Can somebody recommend a fix?
Do I have permission to create files in my directory? I get a bunch of advice on using chmod or changing the "file access", but how do you do this? They never explain that, just "oh use this or that".
If you have terminal access, just run a command in the file's folder:
sudo chmod 777 sometext.txt (for security reasons, switch to a more restrictive chmod later)
If you don't have it, you can modify the file attributes in your FTP client (tick all the fields (execute, read, write) for Owner, Group, and Everyone).
I hope it will solve your problem.
First, make sure you are in the apache group (check it with id username); if not, add your user to the apache group (sudo usermod -G apache -a username). Then make sure the directory is in the apache group (check it with ls -l directory; I suppose the directory is /var/www/html or /srv/whatever, but XAMPP has its own). If not, do a sudo chgrp apache directory. Also, the directory must be writable by group members (chmod g+w directory).
Obviously, the Apache configuration must use the apache user and group. If they don't exist, create them (sudo groupadd apache and sudo useradd apache).
P.S.: chmod 777 is evil! It's better to be in the apache group and avoid letting your file be edited by anyone else!
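Putting that together as a rough sketch (the apache group name and the /var/www/html path are assumptions; substitute whatever group and document root your XAMPP install actually uses):
# add your user to the apache group (log out and back in for it to take effect)
sudo usermod -a -G apache username
# give the apache group the directory and let group members write to it
sudo chgrp apache /var/www/html
sudo chmod g+w /var/www/html
# verify group membership and directory ownership
id username
ls -l /var/www/html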

Solaris 10 sudo configuration Issue

I am using SunOS 5.10 Generic_147441-24 i86pc i386 i86pc
If I run
which sudo
I get the below:
/opt/sfw/bin
When I run "sudo -l" I get the below:
User localuser may run the following commands on this host:
(root) NOPASSWD: /sbin/ifconfig
for "visudo"
visudo
-bash: visudo: command not found
Also, the /etc/sudoers file does not exist on the box.
Please help me configure sudo; how is it possible without the sudoers file?
Perhaps you should have a look at Sun (Oracle) RBAC for accounts, rather than rely on sudo in Solaris? It is unclear from your post why you must use sudo, but if you are not calling sudo from a script, it might be worth your while to read: http://docs.oracle.com/cd/E23824_01/html/821-1456/rbac-1.html
I've never seen the sudo binary exist in /opt, so my first thought would be that your visudo binary is not in your path, or the sudo package you installed does not contain the visudo binary. Either way you may consider downloading the sudo package again and reinstalling.
To see if your visudo binary exists anywhere:
find / -name visudo -print
If you find nothing, remember you do not explicitly need visudo to use sudo -- it's there as a checkpoint to make sure you do not save and exit a sudoers file that has errors, thus possibly compromising your ability to edit it again or breaking sudo for all users on the host.
Also note that /etc/sudoers can start off empty; just fill it in with your sudo rules. For example, to allow a user to run any command via sudo on that host without being prompted for a password:
userid ALL=(ALL) NOPASSWD: ALL
That particular user ID can run "sudo -l" to list the sudo rules available to it. You could do this even just to test that sudo is in fact working on your host.
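As a rough sketch of bootstrapping the file by hand (run as root; stock sudo expects /etc/sudoers to be owned by root with mode 0440, and your build may be compiled to look for sudoers somewhere else entirely, which the strings check in the next answer will reveal):
# create the sudoers file and add a rule for the user
touch /etc/sudoers
echo 'userid ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# set the ownership and mode sudo expects
chown root /etc/sudoers
chmod 0440 /etc/sudoers
# test the rule from that user's account
su - userid -c 'sudo -l'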
You can easily get the location of the sudoers file from the sudo binary itself by doing this:
cat $(which sudo) | strings | grep /sudoers
Then, you would know what file to modify.

Permission denied errors when creating app with custom OpenShift cartridge

I'm using OpenShift Origin and developing a cartridge for the first time. When my bin/install and bin/control scripts are running I've noticed "Permission denied" errors when they try to access anything in the cartridge usr dir. In the node platform.log I see the offending command that OpenShift runs looks like this (where my bin/control start tries to run a script in usr):
/sbin/runuser -s /bin/sh 5351e627ee5a934f290001d2 -c "exec /usr/bin/runcon 'unconfined_u:system_r:openshift_t:s0:c0,c1004' /bin/sh -c \"set -e; /var/lib/openshift/5351e627ee5a934f290001d2/mycart/bin/control start \""
Since the usr dir is a symlink I originally thought it was related to that, but now I think it's related to SELinux (which I don't know much about). If I do an "ls -Z" on my app's cartridge dir, the files are "system_u:object_r:openshift_var_lib_t:s0:c0,c1004", but the contents of the usr dir are "unconfined_u:object_r:default_t:s0", so it doesn't match what's in the above command.
I used the oo-admin-cartridge command to install the cartridge to my Origin VM.
Any ideas on how to fix this?
What I ended up doing was running "chcon -R -u system_u -t bin_t usr/" before installing the cartridge with oo-admin-cartridge. Built-in cartridges are not affected by this problem (I checked nodejs), so I feel like it might be an oo-admin-cartridge bug. I would expect it to massage the SELinux contexts instead of using whatever I provide.
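For reference, a sketch of that workaround, run from the cartridge source directory before oo-admin-cartridge (system_u and bin_t are the values from the answer above):
# relabel the cartridge's usr dir so its SELinux context no longer blocks the gear
chcon -R -u system_u -t bin_t usr/
# verify the new context
ls -Z usr/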

Something goes wrong with the SSH while setting up hadoop

I'm new to Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on a single node. I searched and found lots of tutorials, but I have a problem with the SSH. I did what the tutorial said.
I am sure the problem is with the SSH. I installed openssh-server, and have done this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I can successfully ssh my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed the path to hadoop and then:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So,what's the problem?
You have set up password-less ssh only for your current account. Since you can use ssh localhost without any problem, the next thing you need to do is give execute permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh ---> assigns execute permission to all the scripts
./start-all.sh ----> executes the script
Note: Hadoop can also be run without a password-less ssh setup, using the hadoop-daemon.sh script. The only advantage of password-less ssh is that the ./start-all.sh script will take the trouble of doing that on your behalf on each of the nodes.
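For reference, a rough sketch of starting the daemons one by one with hadoop-daemon.sh (assuming the Hadoop 1.x layout from the question; run as your own user from /usr/local/hadoop, not with sudo):
# HDFS daemons
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
# MapReduce daemons
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker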
You need to change the ownership of your Hadoop folder so that it is owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo, you're running the scripts as root, which has different environment variables etc., which is why you see different behavior.
Why are you using sudo? This is clearly a permission problem.
Try running this without sudo:
bin/start-all.sh