Sending files to smb server - samba

I've got a server with an smb address, smb://files.cluster.ins.localnet/
Is it possible to send files there (fast) via the command line, in a way similar to scp or rsync?
For example,
scp_to_samba folder_to_copy smb://files.cluster.ins.localnet/copied_content_folder/

I haven't found a way to get either rsync or scp to play nicely with Samba servers. Try using smbclient -c as described in this answer:
smbclient //files.cluster.ins.localnet -c 'prompt OFF; recurse ON; lcd folder_to_copy; mkdir copied_content_folder; cd copied_content_folder; mput *'
If you plan to communicate with the same server frequently and want something more command-like, you could wrap it up in a bash 'script' like this:
#!/bin/bash
# scp_to_samba.sh
smbclient //files.cluster.ins.localnet -W domain -U username \
-c "prompt OFF; recurse ON; lcd $1; mkdir $2; cd $2; mput *"
where domain and username are whatever credentials you need to log on to your server. Usage would then be:
./scp_to_samba.sh folder_to_copy copied_content_folder
To copy back from the server, you'd need to switch a few things in that command/script and use mget instead of mput.
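For example, a minimal sketch of the reverse direction could look like this (untested; it assumes the same server and credentials, and the script name is just a placeholder):
#!/bin/bash
# scp_from_samba.sh -- hypothetical reverse of the script above
# Usage: ./scp_from_samba.sh remote_folder local_destination_folder
mkdir -p "$2"
smbclient //files.cluster.ins.localnet -W domain -U username \
-c "prompt OFF; recurse ON; lcd $2; cd $1; mget *"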
Is this 'fast'? I don't know. But it is pretty straightforward and has worked pretty well for me so far.
See the smbclient man page for more details.

Related

Shell script to check if mysql is up or down

I want a bash shell script that I can run from a cron job to check whether MySQL on a remote server is running. If it is, do nothing; otherwise, start the server.
The cron job will check the remote server for a live (or not) MySQL every minute. I can write the cron job myself, but I need help with the shell script that checks whether a remote MySQL is up or down. The response after the check is not important, but the check itself is.
You can use the script below:
#!/bin/bash
# The MySQL user needs permission to connect to the remote server from this
# host; ideally use a dedicated monitoring user rather than root.
USER=root
PASS=root123
if mysqladmin -h remote_server_ip -u"$USER" -p"$PASS" processlist > /dev/null 2>&1
then
    echo "MySQL is up; doing nothing"
else
    # SSH key-based login to the remote server must be set up for this host.
    ssh remote_server_ip "service mysqld start"
fi
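For completeness, a crontab entry to run this check every minute could look like the following (assuming the script above is saved as /usr/local/bin/check_mysql.sh and made executable; the path is just an example):
* * * * * /usr/local/bin/check_mysql.sh >/dev/null 2>&1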
The script in the selected answer works great, but requires that you have the MySQL client installed on the local host. I needed something similar for a Docker container and didn't want to install the MySQL client. This is what I came up with:
# check for a connection to the database server
#
check=$(wget -O - -T 2 "http://$MYSQL_HOST:$MYSQL_PORT" 2>&1 | grep -o mariadb)
while [ -z "$check" ]; do
# wait a moment
#
sleep 5s
# check again
#
check=$(wget -O - -T 2 "http://$MYSQL_HOST:$MYSQL_PORT" 2>&1 | grep -o mariadb)
done
This is a little different, in that it will loop until a database connection can be made. I am also using MariaDB instead of the stock MySQL database. You can change this by changing the grep -o mariadb to something else - I'm not sure what MySQL returns on a successful connection, so you'll have to play with it a bit.
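As a usage sketch, in a Docker entrypoint you could run the loop before starting the application, with the host and port supplied through environment variables (wait-for-db.sh is a hypothetical file containing the loop above, and start-my-app stands in for your real start command):
MYSQL_HOST=db MYSQL_PORT=3306 ./wait-for-db.sh && exec start-my-app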

how to execute init scripts from the command line using ssh

ssh -t user@server1 "ls -al /root/test/"
The command above works fine and lists all the contents of the test directory, but this one fails:
ssh -t user@server1 "/etc/init.d/mysql start"
It does not start the MySQL server. When I log in to the server and run the same command there, MySQL starts fine.
Can anyone explain this behaviour? What am I doing wrong? I'm a bit puzzled :(
Do something like this:
ssh user@hostname "/etc/init.d/mysql start < /dev/null > /tmp/log 2>&1 &"
ssh needs to use stdin and stdout to interact. This will allow it to do that and redirect the output to somewhere useful.
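A similar workaround (an untested sketch along the same lines) is to detach the command with nohup so the startup isn't killed when the SSH session ends:
ssh user@server1 "nohup /etc/init.d/mysql start > /tmp/mysql-start.log 2>&1 &"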
I'm not sure about the root cause of this behaviour; it is probably because ssh allocates a pseudo-terminal. However, you can use a workaround with sudo:
ssh -t user@server1 "sudo service mysql start"

Mysql Auto Backup on ubuntu server

After months of trying to get this to happen, I found a shell script that will get the job done.
Here's the code I'm working with:
#!/bin/bash
### MySQL Server Login Info ###
MUSER="root"
MPASS="MYSQL-ROOT-PASSWORD"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
BAK="/backup/mysql"
GZIP="$(which gzip)"
### FTP SERVER Login info ###
FTPU="FTP-SERVER-USER-NAME"
FTPP="FTP-SERVER-PASSWORD"
FTPS="FTP-SERVER-IP-ADDRESS"
NOW=$(date +"%d-%m-%Y")
### See comments below ###
### [ ! -d $BAK ] && mkdir -p $BAK || /bin/rm -f $BAK/* ###
[ ! -d "$BAK" ] && mkdir -p "$BAK"
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BAK/$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
lftp -u $FTPU,$FTPP -e "mkdir /mysql/$NOW;cd /mysql/$NOW; mput /backup/mysql/*; quit" $FTPS
Everything is running great; however, there are a few things I'd like to fix, but I'm clueless when it comes to shell scripts. I'm not asking anyone to write it for me, just for some pointers. First of all, the /backup/mysql directory on my server accumulates files every time it backs up. That's not too big a deal, but after a year of nightly backups it might get a little full, so I'd like the script to clear that directory after uploading. Also, I don't want to overload my hosting service with files, so I'd like it to clear the remote server's directory before uploading. Lastly, I would like it to upload to a subdirectory on the remote server, such as /mysql.
Why reinvent the wheel? You can just use Debian's automysqlbackup package (should be available on Ubuntu as well).
As for cleaning up old files, the following command might be of help:
find /mysql -type f -mtime +16 -delete
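To clear the local backup directory only after a successful upload, one option (an untested sketch based on the script above; check how your lftp version reports errors before relying on it) is to guard the cleanup on lftp's exit status:
if lftp -u "$FTPU,$FTPP" -e "mkdir -p /mysql/$NOW; cd /mysql/$NOW; mput $BAK/*; quit" "$FTPS"; then
    rm -f "$BAK"/*
fi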
Uploading to the remote server can be done using the scp(1) command.
To avoid the password prompt, read about SSH public key authentication.
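For example, a remote copy with scp over key-based authentication might look like this (hostname, user, and paths are placeholders):
scp /backup/mysql/*.gz backupuser@remote.example.com:/mysql/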
Take a look at Backup; it lets you model your backup jobs using a Ruby DSL and is very powerful.
It supports multiple databases and most popular online storage services, and has lots of cool features.

How to download logs from child gears

I have OpenShift Enterprise 2.0 running in a multi-node setup. I am running a simple JBoss scaled app (3 gears, so HAProxy and 2 JBoss gears). I have used a pre_start_jbossews script in .openshift/action_hooks to configure verbose GC logging (with just gc.log as the file name). However, I can't figure out how to get the gc.log files from the gears running JBoss.
[Interestingly enough, there is an empty gc.log file in the head/parent gear (running HAProxy). It looks like a Java process gets started there too; that might be a bug.]
I tried to run
rhc scp <appname> download . jbossews/gc.log --gears
hoping that it would be implemented like the ssh --gears option, but it just tells me 'invalid option'. So my question is, how can I actually download logs from child gears?
I don't think that you can use RHC directly to get what you want.
That may require a Request for Enhancement to be made against the RHC scp command.
File that here: https://github.com/openshift/rhc/issues
However, you can use the following to find all of your gears:
rhc app show APP_NAME --gears | awk '{print $5}' | tail -n +3
From this list, you can list all of the log files on each gear that is part of the application:
for url in $(rhc app show APP_NAME --gears | awk '{print $5}' | tail -n +3); do for dir in $(ssh $url "ls -R | grep -i log.*:"); do echo -n $url:${dir%?}; echo; done; done
With that, you can use simple scp commands to get the files you need from all of the gears:
for file_dir in $(for url in $(rhc app show APP_NAME --gears | awk '{print $5}' | tail -n +3); do for dir in $(ssh $url "ls -R | grep -i log.*:"); do echo -n $url:${dir%?}; echo; done; done); do scp "$file_dir/*" .; done
If you need to download any files, you can also use an SFTP client like FileZilla to copy them from the server.
I know it's been a while since the original question was posted, but I just bumped into the same issue today and found that you can use the scp command directly if you know the gear SSH URL:
scp local_file user#gear_ssh:remote_file
to upload a file to the gear, or
scp user#gear_ssh:remote_file local_file
to download from the gear.
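Combining the two approaches above, a loop that pulls gc.log from every gear could look like this (untested sketch; it reuses the rhc gear listing from the earlier answer and assumes the jbossews/gc.log path from the question):
for url in $(rhc app show APP_NAME --gears | awk '{print $5}' | tail -n +3); do
    scp "$url:jbossews/gc.log" "./gc-${url#*@}.log"
done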

hg archive to Remote Directory

Is there any way to archive a Mercurial repository to a remote directory over SSH? For example, it would be nice if one could do the following:
hg archive ssh://user@example.com/path/to/archive
However, that does not appear to work. It instead creates a directory called ssh: in the current directory.
I made the following quick-and-dirty script that emulates the desired behavior by creating a temporary ZIP archive, copying it over SSH, and unzipping it into the destination directory. However, I would like to know if there is a better way.
#!/bin/bash
if [[ $# != 1 ]]; then
    echo "Usage: $0 [user@]hostname:remote_dir"
    exit 1
fi
arg=$1
arg=${arg%/}  # remove trailing slash
host=${arg%%:*}
remote_dir=${arg##*:}
# zip named to match lowest directory in $remote_dir
zip=${remote_dir##*/}.zip
# root of archive will match zip name
hg archive -t zip "$zip"
# make $remote_dir if it doesn't exist
ssh "$host" mkdir --parents "$remote_dir"
# copy zip over ssh into destination
scp "$zip" "$host:$remote_dir"
# unzip into containing directory (will prompt for overwrite)
ssh "$host" unzip "$remote_dir/$zip" -d "$remote_dir/.."
# clean up zips
ssh "$host" rm "$remote_dir/$zip"
rm "$zip"
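Usage would then be something like this (assuming the script is saved as hg-archive-remote.sh; the name is just a placeholder):
./hg-archive-remote.sh user@example.com:path/to/archive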
Edit: clone-and-push would be ideal, but unfortunately the remote server does not have Mercurial installed.
Nope, this is not possible -- we always assume that there is a functioning Mercurial installation on the remote host.
I definitely agree with you that this functionality would be nice, but I think it would have to be made in an extension. Mercurial is not a general SCP/FTP/rsync file-copying program, so don't expect to see this functionality in the core.
This reminds me... perhaps you could build on the FTP extension to make it do what you want. Good luck! :-)
Have you considered simply having a clone on the remote and doing hg push to archive?
Could you use an SSH tunnel to mount a remote directory on your local machine and then just do standard hg clone and hg push operations 'locally' (as far as hg knows), while they actually write to a filesystem that lives on the remote computer?
It looks like there are several stackoverflow questions about doing this:
How do I mount a remote Linux folder in Windows through SSH?
Map SSH drive in Windows
How can I mount a remote directory on my computer?
I am often in a similar situation. The way I get around it is with sshfs.
sshfs me@somewhere-else:path/to/repo local/path/to/somewhere-else
hg archive local/path/to/somewhere-else
fusermount -u local/path/to/somewhere-else
The only disadvantage is that sshfs is slower than NFS, Samba, or rsync. Generally I don't notice, since I only rarely need to do anything in the remote filesystem.
You could also simply execute hg on the remote host:
ssh user@example.com "cd /path/to/repo; hg archive -r 123 /path/to/archive"
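If the remote host does not have Mercurial installed (as noted in the edit above), a similar one-liner (an untested sketch; it assumes your Mercurial version accepts '-' as the destination to stream to stdout and that the remote tar supports --strip-components) streams the archive from the local repository instead:
hg archive -t tar -p archive-tmp - | ssh user@example.com "mkdir -p /path/to/archive && tar -x -f - --strip-components=1 -C /path/to/archive"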