I'm new to Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on a single node. I searched and found lots of tutorials, but I have a problem with the SSH step. I did what the tutorial said.
I am sure the problem is with SSH. I installed openssh-server, and had done this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I could successfully ssh to my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed into the Hadoop directory and ran:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So, what's the problem?
You have set up password-less SSH only for your current account. Since you can already use ssh localhost without any problem, the next thing you need to do is give execution permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh ---> assigns execution permission to all the scripts
./start-all.sh ----> executes the script
Note: Hadoop can also be run without a password-less SSH setup by using the hadoop-daemon.sh script (see the sketch below). The only advantage of password-less SSH is that the ./start-all.sh script will take the trouble of starting the daemons on each of the nodes for you.
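For reference, a minimal sketch of that alternative, starting each daemon by hand (daemon names assume a standard Hadoop 1.x layout):
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start secondarynamenode
bin/hadoop-daemon.sh start jobtracker
bin/hadoop-daemon.sh start tasktracker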
You need to change permissions for your Hadoop folder to be owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo, you're running the scripts as root, which has different environment variables etc., which is why you see different behavior.
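A quick way to see that sudo switches you to a different user (and hence a different environment), assuming your login is hadoop00 as above:
whoami        # prints hadoop00
sudo whoami   # prints root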
Why are you using sudo? This is clearly a permissions problem.
Try running this without sudo:
bin/start-all.sh
I am running OS X 10.9.5 and while trying to reset my MySQL root password I typed this:
sudo mysqld_safe --skip-grant-tables
After being asked for the admin password, I got this error :
sudo: mysqld_safe: command not found
I ran this after changing into the MySQL directory:
cd /usr/local/mysql
Also, I have a problem with the sudo command: even though I am logged in on my admin account, it often gives me 'Permission denied', for example when using this command for basically the same problem (resetting my root password):
sudo kill `cat /usr/local/mysql/data/rodongi.pid`
I then got
cat: /usr/local/mysql/data/rodongi.pid: Permission denied
Password:
After entering the password …
usage: kill [-s signal_name] pid ...
kill -l [exit_status]
kill -signal_name pid ...
kill -signal_number pid ...
I have no idea why:
1) I don't have permission even though I used the sudo command (and another time sudo !!)
2) mysql-bash doesn't recognise the mysql and mysqld commands (I also tried in terminal-bash; that doesn't work either)
First Problem
You're trying to execute the command mysqld_safe, so that command has to be on the PATH, which is where the shell looks for commands. (You can view these locations by running echo $PATH; the different locations are separated by colons.)
Since you're trying to run a file that is in the current directory, you should type ./mysqld_safe to tell the shell that you're giving a path to a file; otherwise it'll search for it in the PATH. (You can run the file from anywhere by specifying the full path.)
Another solution is to make a symbolic link in /usr/local/bin/ that points to /usr/local/mysql/mysqld_safe (which is the path to the command, if I understood you correctly). That way you can run the command from anywhere, because it's in the path the shell searches.
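For example (a sketch, assuming the path above is correct and that /usr/local/bin is on your PATH):
sudo ln -s /usr/local/mysql/mysqld_safe /usr/local/bin/mysqld_safe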
Second Problem
The cat command surrounded by backticks is executed by the shell before the sudo command runs (if the file were readable by everyone, the shell would execute something like sudo kill 12345).
To run the cat as root you should run this command:
sudo bash -c 'kill `cat /usr/local/mysql/data/rodongi.pid`'
That way, you run bash as root, which in turn runs the kill command, and thus reads the rodongi.pid file as root.
I am trying to mount a remote filesystem on Google Container Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using the following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting this error:
SSHFS version 2.5
read: Connection reset by peer
I referred to this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could log in using ssh via the following commands:
$ gcloud compute config-ssh
$ ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't work out which key and username I should use for the sshfs login.
Update (11/5):
I am using the following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder to my user, and I checked that the IP matches the entry in the ~/.ssh/config file. However, I still get the error read: Connection reset by peer.
The problem with the command below is that:
1) unless you have a static IP, the IP keeps changing on machine reboot
2) you need to use the .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>#<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce
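When you are done with the mount, it can be detached with the standard FUSE tool (not part of the original post):
fusermount -u /mnt/gce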
A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default, the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided, as sketched below.
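A minimal sketch of such an entry (the host alias and the values in angle brackets are placeholders):
Host gce
    HostName <instance-external-ip>
    User <user_name>
    IdentityFile ~/.ssh/google_compute_engine
With that in place, sshfs gce: /mnt/gce picks up the same settings that a plain ssh gce uses.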
If you get this from sshfs:
read: Connection reset by peer
it may help to make the key file read-only:
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.
I am learning Docker these days, and I want to install MySQL inside a Docker container.
Here is my Dockerfile
FROM ubuntu:14.04
ADD ./setup_mysql.sh /setup_mysql.sh
RUN chmod 755 /setup_mysql.sh
RUN /setup_mysql.sh
EXPOSE 3306
CMD ["/usr/sbin/mysqld"]
and the shell script setup_mysql.sh:
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
service mysql start &
sleep 5
echo "UPDATE mysql.user SET password=PASSWORD('rootpass') WHERE user='root'" | mysql
echo "CREATE DATABASE devdb" | mysql
echo "GRANT ALL ON devdb.* TO devuser #'%' IDENTIFIED BY 'devpass'" | mysql
sleep 5
service mysql stop
Something wrong happened when running sudo docker build -t test/devenv .
Setting up mysql-server-5.5 (5.5.38-0ubuntu0.14.04.1) ...
invoke-rc.d: policy-rc.d denied execution of stop.
invoke-rc.d: policy-rc.d denied execution of start.
And if I remove the second sleep 5, the command service mysql stop will throw
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Why does this happen?
Thank you!
I highly recommend leveraging the work of others. For example, check out the MySQL image from the Docker registry:
https://registry.hub.docker.com/_/mysql/
Here are the associated Git repository files:
https://github.com/docker-library/mysql/blob/master/5.7
If you look into the Dockerfile you'll notice the software is being installed as expected:
.. apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}"* ..
The trick is to realize that a database instance is not the same thing as the database software; only the latter is shipped with the image. Creating DBs and loading them with data is something that is done at run time. So that work is done by an extra script, pulled into the image and set up to be executed when you run the container:
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
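For example, running the official image and letting the entrypoint script create the database at container start (the environment variable names are the ones documented on the image page; the values are reused from your question):
docker run --name dev-mysql \
    -e MYSQL_ROOT_PASSWORD=rootpass \
    -e MYSQL_DATABASE=devdb \
    -e MYSQL_USER=devuser \
    -e MYSQL_PASSWORD=devpass \
    -d mysql:5.7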
Hope this helps.
Add this to your Dockerfile:
RUN su
RUN echo exit 0 > /usr/sbin/policy-rc.d
I was facing the same issue. This code fixed it.
Here is a good post which tries to root cause the issue you are facing.
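For reference, a commonly seen variant of that fix also marks the file executable (an assumption based on standard Debian/Ubuntu behavior, where invoke-rc.d consults policy-rc.d only when it is executable):
RUN printf '#!/bin/sh\nexit 0\n' > /usr/sbin/policy-rc.d && chmod +x /usr/sbin/policy-rc.d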
Shorter way:
RUN echo "#!/bin/sh\nexit 0" > /usr/sbin/policy-rc.d should resolve your issue
OR
If that doesn't resolve the issue, try running your Docker container with the privileged option, like this: docker run --privileged -d -ti DOCKER_IMAGE:TAG
Ideally, I would not recommend running a container with the privileged option unless it's a test-bed container. Running a Docker container with privileged gives all capabilities to the container and lifts all the limitations normally enforced, so the container can then do almost everything the host can do. That is not good practice: it defeats Docker's purpose of isolating the container from the host machine.
The ideal way to do this is to set the capabilities of your Docker container based on what you want to achieve; searching for Docker capabilities should help you find the appropriate one for your container. A sketch follows.
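For example (the specific capability here is only an illustration; pick whichever one your container actually needs):
docker run --cap-add=SYS_NICE -d -ti DOCKER_IMAGE:TAG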
I am using SunOS 5.10 Generic_147441-24 i86pc i386 i86pc
If I run
which sudo
I get the below:
/opt/sfw/bin
When I run sudo -l I get the below:
User localuser may run the following commands on this host:
(root) NOPASSWD: /sbin/ifconfig
for "visudo"
visudo
-bash: visudo: command not found
Also, the /etc/sudoers file does not exist on the box.
Please help me configure sudo. How is it possible without the sudoers file?
Perhaps you should have a look at Sun (Oracle) RBAC for accounts, rather than relying on sudo in Solaris? It is unclear from your post why you must use sudo, but if you are not calling sudo from a script, it might be worth your while to read: http://docs.oracle.com/cd/E23824_01/html/821-1456/rbac-1.html (a short sketch of that route follows).
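A minimal sketch of the RBAC route (the profile name is only an illustration; see the linked docs for the details):
usermod -P "Network Management" localuser   # grant a rights profile to the user
pfexec /sbin/ifconfig -a                    # run the privileged command through RBAC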
I've never seen the sudo binary exist in /opt, so my first thought would be that your visudo binary is not in your path, or the sudo package you installed does not contain the visudo binary. Either way you may consider downloading the sudo package again and reinstalling.
To see if your visudo binary exists anywhere:
find / -name visudo -print
If you find nothing, remember that you do not explicitly need visudo to use sudo -- it's there as a safety check to make sure you do not save and exit a sudoers file that has errors, which could compromise your ability to edit it again or break sudo for all users on the host.
Also note that /etc/sudoers can start off empty; just fill it in with your sudo rules. For example, to allow a user to run all commands on that host via sudo without being prompted for a password:
userid ALL=(ALL) NOPASSWD: ALL
That particular user ID can then run sudo -l to list the sudo rules available to it. You could do this just to test that sudo is in fact working on your host.
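If you have to create /etc/sudoers from scratch, note that sudo insists on strict ownership and permissions (owned by root, mode 0440), so something like this, run as root, should get you started before adding your rules:
touch /etc/sudoers
chown root /etc/sudoers
chmod 440 /etc/sudoers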
You can easily get the location of the sudoers file from the sudo binary itself:
cat $(which sudo) | strings | grep /sudoers
Then, you would know what file to modify.
I'm using OpenShift Origin and developing a cartridge for the first time. When my bin/install and bin/control scripts are running I've noticed "Permission denied" errors when they try to access anything in the cartridge usr dir. In the node platform.log I see the offending command that OpenShift runs looks like this (where my bin/control start tries to run a script in usr):
/sbin/runuser -s /bin/sh 5351e627ee5a934f290001d2 -c "exec /usr/bin/runcon 'unconfined_u:system_r:openshift_t:s0:c0,c1004' /bin/sh -c \"set -e; /var/lib/openshift/5351e627ee5a934f290001d2/mycart/bin/control start \""
Since the usr dir is a symlink, I originally thought it was related to that, but now I think it's related to SELinux (which I don't know much about). If I do an ls -Z on my app's cartridge dir, the files are "system_u:object_r:openshift_var_lib_t:s0:c0,c1004", but the contents of the usr dir are "unconfined_u:object_r:default_t:s0", so it doesn't match what's in the above command.
I used the oo-admin-cartridge command to install the cartridge to my Origin VM.
Any ideas on how to fix this?
What I ended up doing was running chcon -R -u system_u -t bin_t usr/ before installing the cartridge with oo-admin-cartridge. Built-in cartridges are not affected by this problem (I checked nodejs), so I feel like it might be an oo-admin-cartridge bug; I would expect it to massage the SELinux contexts instead of using whatever I provide.
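For reference, the relabeling step as commands, run from the cartridge directory (as described above):
chcon -R -u system_u -t bin_t usr/   # relabel the cartridge's usr dir
ls -Z usr/                           # confirm the new SELinux context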