ssh -t user@server1 "ls -al /root/test/"
The command above works fine and lists all the contents of the test directory, but this command fails:
ssh -t user@server1 "/etc/init.d/mysql start"
It does not start the MySQL server. If I log in to the server and run the same command there, it starts fine.
Can anyone explain this behaviour? What am I doing wrong? I'm a bit puzzled :(
Do something like this:
ssh user@hostname "/etc/init.d/mysql start < /dev/null > /tmp/log 2>&1 &"
ssh needs stdin and stdout to interact with the remote command. The redirections above satisfy that and send the output somewhere useful.
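For example, to verify the server actually started, you can then inspect the log file used in the command above:
ssh user@hostname "tail /tmp/log"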
I'm not sure about the root cause of this behaviour; it's probably because ssh allocates a pseudo-terminal. However, you can use a workaround with sudo:
ssh -t user@server1 "sudo service mysql start"
I basically know nothing about Docker, and not much more about bash either. So:
There's a command in the README of a Laravel project I'm working on that shows how to load some data into a local MySQL Docker image by sending queries from a file located on the HOST:
docker exec -i {image} mysql -uroot -p{password} {database} < location/of/file.sql
What I want to do is "hide" the password from the README and make it read from the .env file.
So, I want to do something like this:
docker exec --env-file=.env -i {image} mysql -uroot -p$DB_PASSWORD {database} < location/of/file.sql
I've tested that docker ... printenv does show the variables from the file. But echoing one of them outputs a blank line (docker ... echo $DB_PASSWORD), and running the MySQL command with it gets me "Access denied for user 'root'@'localhost'".
I've tried running the MySQL command "directly" (docker ... mysql ... < file.sql) and also "indirectly" (docker bash -c "mysql ..." < file.sql).
You should prevent your local shell from expanding the variable (by single-quoting, or by escaping the $).
That way the reference is passed to the container's shell and expanded there:
docker exec --env-file=.env -i {image} bash -c 'mysql -uroot -p$DB_PASSWORD {database}' < location/of/file.sql
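Alternatively, keep the double quotes and escape the $ so your local shell passes it through literally:
docker exec --env-file=.env -i {image} bash -c "mysql -uroot -p\$DB_PASSWORD {database}" < location/of/file.sql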
It could be one of two cases; see the check below:
Check the key name in your env file and in the docker run command.
Check the path of the env file you are mapping.
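For example, you can verify the variable actually reaches the container (a minimal sketch; {image} is the placeholder used above):
docker exec --env-file=.env {image} printenv DB_PASSWORD
If this prints a blank line, the key name or the file path is wrong.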
I am running OS X 10.9.5 and, while trying to reset my MySQL root password, I typed this:
sudo mysqld_safe --skip-grant-tables
After being asked for the admin password, I got this error :
sudo: mysqld_safe: command not found
I ran the command from inside the MySQL directory, after:
cd /usr/local/mysql
Also, I have a problem with the sudo command: even though I am logged in with my admin account, it often gives me "permission denied", for example when using this command for basically the same problem (resetting my root password):
sudo kill `cat /usr/local/mysql/data/rodongi.pid`
I then got
cat: /usr/local/mysql/data/rodongi.pid: Permission denied
Password:
After entering the password …
usage: kill [-s signal_name] pid ...
kill -l [exit_status]
kill -signal_name pid ...
kill -signal_number pid ...
I have no idea why:
1) I don't have permission even though I used the sudo command (and, another time, sudo !!)
2) mysql-bash doesn't recognise the mysql and mysqld commands (I also tried in terminal-bash; it doesn't work either)
First problem
You're trying to execute the command mysqld_safe, so that command should be on the PATH, the list of locations where the shell looks for commands. (You can view these locations by running echo $PATH; they are separated by colons.)
Since you're trying to run a file that is in the current directory, you should type ./mysqld_safe to tell the shell that you're giving a path to a file; otherwise it'll search for it in the PATH. (You can run the file from anywhere by specifying the full path.)
Another solution is to make a symbolic link in /usr/local/bin/ that points to /usr/local/mysql/mysqld_safe (which is the path to the command, if I understood you correctly). That way you can run the command from anywhere, because /usr/local/bin is in the path the shell searches.
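For example, the link could be created like this (assuming the path above is correct; adjust it if mysqld_safe actually lives in a bin/ subdirectory):
sudo ln -s /usr/local/mysql/mysqld_safe /usr/local/bin/mysqld_safe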
Second Problem
The cat command surrounded by backticks is executed by your shell before the sudo command runs (if the file were readable by everyone, the shell would execute something like sudo kill 12345).
To run the cat as root you should run this command:
sudo bash -c 'kill `cat /usr/local/mysql/data/rodongi.pid`'
That way, you run bash as root, which in turn runs the kill command, and thus reads the rodongi.pid file as root.
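Equivalently, with the more readable $(...) form of command substitution:
sudo bash -c 'kill "$(cat /usr/local/mysql/data/rodongi.pid)"'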
I want a bash shell script that I can run via a cron job to check whether MySQL on a remote server is running. If it is, do nothing; otherwise, start the server.
The cron job will check the remote server every minute. I can write the cron job myself, but I need help with the shell script that checks whether a remote MySQL is up or down. The response after the check is not important, but the check itself is.
You can use the script below:
#!/bin/bash
USER=root
PASS=root123

# The user must have MySQL permissions on the remote server.
# Ideally you should use a different user than root.
mysqladmin -h remote_server_ip -u"$USER" -p"$PASS" processlist

if [ $? -eq 0 ]
then
    echo "do nothing"
else
    # Requires SSH access to the remote server (e.g. password-less
    # key-based login for root set up from this server).
    ssh remote_server_ip "service mysqld start"
fi
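For completeness, a crontab entry that runs the check every minute might look like this (the script path is a placeholder):
* * * * * /path/to/check_mysql.sh >/dev/null 2>&1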
The script in the selected answer works great, but requires that you have the MySQL client installed on the local host. I needed something similar for a Docker container and didn't want to install the MySQL client. This is what I came up with:
# check for a connection to the database server
check=$(wget -O - -T 2 "http://$MYSQL_HOST:$MYSQL_PORT" 2>&1 | grep -o mariadb)

while [ -z "$check" ]; do
    # wait a moment, then check again
    sleep 5s
    check=$(wget -O - -T 2 "http://$MYSQL_HOST:$MYSQL_PORT" 2>&1 | grep -o mariadb)
done
This is a little different, in that it will loop until a database connection can be made. I am also using MariaDB instead of the stock MySQL database. You can change this by changing grep -o mariadb to something else; I'm not sure what MySQL returns on a successful connection, so you'll have to experiment a bit.
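If you'd rather not depend on the banner containing "mariadb", an alternative sketch is to test whether the TCP port accepts connections at all, using bash's /dev/tcp feature (this only proves the port is reachable, not that the server is ready for queries):
check=$( (echo > "/dev/tcp/$MYSQL_HOST/$MYSQL_PORT") 2>/dev/null && echo up )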
I'm using OpenShift Origin and developing a cartridge for the first time. When my bin/install and bin/control scripts are running I've noticed "Permission denied" errors when they try to access anything in the cartridge usr dir. In the node platform.log I see the offending command that OpenShift runs looks like this (where my bin/control start tries to run a script in usr):
/sbin/runuser -s /bin/sh 5351e627ee5a934f290001d2 -c "exec /usr/bin/runcon 'unconfined_u:system_r:openshift_t:s0:c0,c1004' /bin/sh -c \"set -e; /var/lib/openshift/5351e627ee5a934f290001d2/mycart/bin/control start \""
Since the usr dir is a symlink I originally thought it was related to that, but now I think it's related to selinux (which I don't know much about). If I do a "ls -Z" on my app's cartridge dir the files are "system_u:object_r:openshift_var_lib_t:s0:c0,c1004" but the contents of the usr dir are "unconfined_u:object_r:default_t:s0", so it doesn't match what's in the above command.
I used the oo-admin-cartridge command to install the cartridge to my Origin VM.
Any ideas on how to fix this?
What I ended up doing was running "chcon -R -u system_u -t bin_t usr/" before installing the cartridge with oo-admin-cartridge. Built-in cartridges are not affected by this problem (I checked nodejs), so I feel like it might be an oo-admin-cartridge bug; I would expect it to massage the SELinux labels instead of using whatever I provide.
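For reference, the relabel plus a quick check, run from the cartridge directory before installing with oo-admin-cartridge:
chcon -R -u system_u -t bin_t usr/
ls -Z usr/    # entries should now show system_u:object_r:bin_t instead of unconfined_u:object_r:default_t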
I'm new to Hadoop. I installed Ubuntu 12.10 on my computer and I want to install Hadoop in pseudo-distributed mode on a single node. I searched and found lots of tutorials, but I have a problem with SSH. I did what the tutorial said.
I am sure the problem is with SSH. I installed openssh-server, and have done this:
hadoop00@WebsoftStation:~$ ssh-keygen -t dsa -P "" -f ~/.ssh/id_dsa
hadoop00@WebsoftStation:~/.ssh$ cat ~/.ssh/id_dsa.pub >> authorized_keys
Then I can successfully ssh my localhost like this:
hadoop00@WebsoftStation:~$ ssh localhost
It worked.
So I changed into the Hadoop directory and then:
hadoop00@WebsoftStation:/usr/local/hadoop$ sudo bin/start-all.sh
[sudo] password for hadoop00:
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-WebsoftStation.out
root@localhost's password:
root@localhost's password: localhost: Permission denied, please try again.
So,what's the problem?
You have set up password-less ssh only for your current account. Since you can use ssh localhost without any problem, the next thing you need to do is give execution permission to your scripts.
Execute the following commands:
chmod +x bin/*.sh    # assigns execution permission to all the scripts
bin/start-all.sh     # executes the script
Note: Hadoop can also be run without password-less ssh by using the hadoop-daemon.sh script. The only advantage of password-less ssh is that the start-all.sh script will take care of running it on each of the nodes for you.
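For example, with hadoop-daemon.sh you start each daemon individually on every node (daemon names taken from the standard Hadoop 1.x scripts; a sketch):
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start datanode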
You need to change permissions for your Hadoop folder to be owned by the hadoop00 user:
cd /usr/local/
sudo chown -R hadoop00:hadoop00 /usr/local/hadoop
Then you can cd into the sbin folder and run things without sudo. If you use sudo, you're running the scripts as root, which has different environment variables etc., which is why you saw different behavior.
Why are you using sudo? This is clearly a permission problem.
Try running this without sudo:
bin/start-all.sh