Rsync ACLs over NFSv4 to EXT4

I'm trying to rsync with options -AX over an NFSv4 mount to an ext4 drive that has acl and user_xattr enabled. The command
rsync -aAX /data/ /mnt/back/data
results in
rsync: rsync_xal_set: lsetxattr("/mnt/back/data/Users/user/Documents/Desktop","security.NTACL") failed: Operation not supported (95)
along with similar errors for all the other files.
Running the same command to a local folder works perfectly, so it must have something to do with NFSv4 or ext4 on the server side.
My fstab mount
UUID=732683f0-e6ac-42d6-a492-e07643d7719c /media/back ext4 defaults,acl,user_xattr,barrier=1 0 0
My nfs exports file
/media/back 10.111.106.3(fsid=0,rw,async,no_root_squash,no_subtree_check)
My mount command
mount -t nfs4 -o proto=tcp,port=2049 10.111.106.12:/ /mnt/back
Version info:
Server
Ubuntu 14.04.2
Client
Ubuntu 14.04.2
rsync 3.1.0
Thanks for any suggestions!

This isn't a direct answer to the error I was receiving, but I worked around it by letting rsync do the networking via ssh instead of going over NFS. It turns out NFS doesn't support xattrs.
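For reference, the workaround looked roughly like this (assuming root SSH access to the server so ownership and ACLs can be written; the host and target path are the ones from my setup above):
rsync -aAX -e ssh /data/ root@10.111.106.12:/media/back/data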

Related

How to deploy a war file into tomcat server installed in Google Compute Engine instance?

I have created a Linux VM instance in Google Compute Engine. I installed the JDK and Tomcat with the following commands using SSH.
sudo apt-get install default-jdk
sudo apt-get install tomcat8
I have a war file on my local machine. How can I move the war file from my local machine to the Compute Engine VM and run the war on the Tomcat server?
I would recommend using Storage Buckets to store files. This way you can copy the same files to different VMs.
To use Google Cloud Storage from the command line, you first need to install gsutil.
To copy from your local machine to a bucket:
gsutil cp *.txt gs://my-bucket
To copy from a bucket to a VM, connect to your VM and run:
gsutil cp gs://my-bucket/*.txt .
More info about gsutil cp is available in the gsutil documentation.
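Once the war is on the VM, deploying is just a matter of dropping it into Tomcat's webapps directory and restarting Tomcat. A rough sketch, assuming the Ubuntu/Debian tomcat8 package layout and a placeholder myapp.war:
gsutil cp gs://my-bucket/myapp.war .
sudo cp myapp.war /var/lib/tomcat8/webapps/
sudo service tomcat8 restart
Tomcat should then auto-deploy the war and, by default, serve it on port 8080 under /myapp.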

Unable to mount a directory on Google Compute Engine using sshfs

I am trying to mount a remote filesystem on Google Container Engine. I am following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-use-sshfs-to-mount-remote-file-systems-over-ssh
Using the following sshfs command:
sudo sshfs -o sshfs_debug,allow_other <instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce-container
I am getting this error:
SSHFS version 2.5
read: Connection reset by peer
I referred to this link https://cloud.google.com/sdk/gcloud/reference/compute/config-ssh
and could log in using SSH via the following commands:
$ gcloud compute config-ssh
$ ssh <instance-name>.<region>.<project_id>
Any ideas what might be going wrong here? I can't figure out what key and username I should use for the sshfs login.
Update (11/5):
I am using the following command:
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I have chowned the /mnt/gce folder to my user and checked that the IP matches the entry in the ~/.ssh/config file. However, I still get the error read: Connection reset by peer.
The problem with the command below is that:
1) unless you have a static IP, it keeps changing on machine reboot
2) you need to use .pub file
sshfs -o IdentityFile=~/.ssh/google_compute_engine <user>@<ip>:~/ /mnt/gce
I finally got it working with the following commands:
sudo mkdir /mnt/gce
sudo chown <user> /mnt/gce
sshfs -o IdentityFile=~/.ssh/google_compute_engine.pub <user_name>@<instance-name>.<region>.<project_id>:/home/<user_name> /mnt/gce
A few things that might be the cause of the problem:
Don't use sshfs as root. It's a FUSE filesystem and meant to be user mounted.
Don't specify a full path as the remote FS. It's SSH, so by default, the $PWD on the remote side is the login user's $HOME.
If ssh works, sshfs will work. The easiest way is to make sure that ~/.ssh/config has an entry for the remote host with the user, port, etc. provided (see the sample entry below).
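A minimal ~/.ssh/config entry might look like this (the host alias is a placeholder; the other values come from the question above):
Host gce-box
    HostName <instance-name>.<region>.<project_id>
    User <user_name>
    IdentityFile ~/.ssh/google_compute_engine
After that, sshfs gce-box: /mnt/gce-container should work with no extra options. Note that gcloud compute config-ssh writes entries of this shape automatically.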
If you get this from sshfs:
read: Connection reset by peer
it may help to set the key file to read-only
chmod 400 /{{path_to_your_key}}/keypair.pem
and connect again.

How to mount google disk to docker run -v

Is it possible to use docker-machine with a Google disk?
I have a Docker machine running via the docker-machine driver. I then need to be able to run docker run -v "path to google disk" from the terminal / docker-machine.
That's an interesting use case. There isn't a Volume Plugin to do that at the moment. But I may look into it (I just experimented with writing a Volume Plugin for Google Cloud Storage).
However, you should be able to mount the disk on the Docker Machine itself, and then reference it as you would with any other filesystem directories.
E.g.,
1) Attach a disk to the instance
2) Format and mount it (e.g. mount to /mnt/mydisk)
3) Run docker run -ti -v /mnt/mydisk:/data busybox /bin/sh
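A rough sketch of those three steps with gcloud (the disk and instance names, zone, size and the ext4 choice are all assumptions):
gcloud compute disks create mydisk --size 200GB --zone us-central1-a
gcloud compute instances attach-disk my-docker-machine --disk mydisk --device-name mydisk --zone us-central1-a
# then, on the instance:
sudo mkfs.ext4 /dev/disk/by-id/google-mydisk
sudo mkdir -p /mnt/mydisk
sudo mount /dev/disk/by-id/google-mydisk /mnt/mydisk
docker run -ti -v /mnt/mydisk:/data busybox /bin/sh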

AWS CentOS 6.5 Instance + AWS EBS volume for web hosting files and database?

I have an AWS instance running CentOS 6.5. It has been updated, secured, and setup for web hosting (LAMP). I attached an EBS volume to the instance and mounted it under /data.
Two questions:
How can I get MySQL to use the /data directory as its database storage location? (I don't want to run the program from the /data directory, just put the .sql files there.)
How can I do the same for my web site? I plan on running a wordpress site and its current location is in the /var/www/html directory. I want to change this to /data/site.
I want to keep the web site files and database on a separate volume: /data. If my instance were to become corrupted or inaccessible, I could attach the EBS volume to a new instance.
I have read dozens of tutorials and articles on how to move MySQL to a different directory, but nothing is working. MySQL refuses to start up afterwards. Can I keep MySQL installed as is, but have it read/write the database in a different directory like /data, which is a mounted EBS volume, or is this not possible at all with Linux?
Here are some of the tutorials and articles I have been following/testing with:
aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1
spruce.it/noise/setting-up-a-proper-lamp-stack-on-aws-ec2-ebs/
EDIT:
This is what I am doing.
Create a new instance using this AMI: https://aws.amazon.com/marketplace/pp/B00IOYDTV6?ref=cns_srchrow
Once the instance is up, I run updates using: sudo yum update -y
Once updated, I set it up as a LAMP web server using these instructions: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
In addition to the above steps, I allow port 80 tcp connections on the built-in firewall. I run these commands: sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT and sudo service iptables save
Once this is done, I test my site at http://IP-ADDRESS (this shows me the Apache Test Page)
Once LAMP is installed, I install the MySQL Server by running this: yum install mysql-server
After that is installed, I proceed to the "To secure the MySQL server" instructions on the previous Amazon document.
Next, I install PHPMyAdmin using these two tutorials: http://tecadmin.net/installing-apache-mysql-php-on-centos-redhat/# and http://tecadmin.net/how-to-install-phpmyadmin-on-centos-using-yum/
At this point, I have a fully functioning web server. Now, I want to use the AWS EBS volume to store all the databases and website files. First, I attach the newly created AWS EBS volume. I use this tutorial to do this: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
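For context, that attach-format-mount step boils down to something like this (the device name /dev/xvdf is an assumption; check lsblk first, and only run mkfs on an empty volume):
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /data
sudo mount /dev/xvdf /data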
THIS IS WHERE THE PROBLEMS START.
When I use the information in this tutorial: aws.amazon.com/articles/1663?_encoding=UTF8&jiveRedirect=1, it just says FAILED.
So one thing you can do is the following, which avoids copying all directories. You need to make sure that all permissions are set up correctly for it to work:
MySQL data dir:
mv /var/lib/mysql /var/lib/mysql.orig
mkdir -p /<your-new-ebs-mountpoint>/var/lib/mysql
chown mysql.mysql /<your-new-ebs-mountpoint>/var/lib/mysql
chmod 700 /<your-new-ebs-mountpoint>/var/lib/mysql
etc configs:
mkdir -p /<your-new-ebs-mountpoint>/etc
cp /etc/my.cnf /<your-new-ebs-mountpoint>/etc/my.cnf
mv /etc/my.cnf /etc/my.cnf.orig
ln -s /<your-new-ebs-mountpoint>/etc/my.cnf /etc/my.cnf
logs:
mkdir -p /<your-new-ebs-mountpoint>/var/log
mv /var/log/mysqld.log /var/log/mysqld.log.orig
touch /<your-new-ebs-mountpoint>/var/log/mysqld.log
chown mysql.mysql /<your-new-ebs-mountpoint>/var/log/mysqld.log
chmod 640 /<your-new-ebs-mountpoint>/var/log/mysqld.log
ln -s /<your-new-ebs-mountpoint>/var/log/mysqld.log /var/log/mysqld.log
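Note that MySQL still has to be pointed at the relocated data directory, either with a datadir line in the relocated my.cnf or with a symlink. A minimal sketch of the symlink approach (an assumption on top of the steps above):
ln -s /<your-new-ebs-mountpoint>/var/lib/mysql /var/lib/mysql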

How to backup MySQL in rescue mode?

I have a Rackspace VPS running CentOS that I can only access in Read Only rescue mode. How can I backup/restore MySQL using SSH and FTP with no access to mysql command line tools?
The reason for this is that the image used to build the server has an issue with Nova so Rackspace are unable to build from it. What I need to do is transfer all files onto a clean new machine.
I can access all files without issue, but I would also like to recover any MySQL databases that were on the machine. However, MySQL will not run in the rescue mode Rackspace offers and I can't use these tools to make any kind of dump - I have SSH and FTP only. Can anyone hint at how I can rescue/transfer my MySQL databases to the new machine?
Set up a new VPS with an identical version of MySQL and transfer (scp/rsync/sftp) the raw database files in /var/lib/mysql and the MySQL conf file (typically /etc/my.cnf) to the new server. Make sure the permissions of these files don't change on the new server. This wouldn't work without a third-party utility (Percona XtraBackup, for example) if mysqld were running, but since you cannot run mysqld in the read-only rescue mode anyway, this is your best bet.
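A rough sketch of that transfer, run from the rescue environment (the new server's address is a placeholder, and root access on both ends is assumed so ownership is preserved):
rsync -a /var/lib/mysql/ root@<new-server>:/var/lib/mysql/
rsync -a /etc/my.cnf root@<new-server>:/etc/my.cnf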
In this example the database directory is named miliardowo.
My old server was Debian. The new one is Ubuntu 14.04 LTS.
Copy the files from /var/lib/mysql/miliardowo to the new server, then fix the permissions in /var/lib/mysql/:
chmod 700 miliardowo/
chmod 660 miliardowo/*
chmod g-s miliardowo/
chmod g-s miliardowo/*
chmod u-s miliardowo/
chmod u-s miliardowo/*
chown mysql:mysql miliardowo/
chown mysql:mysql miliardowo/*
updatedb
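After the ownership and permission fixes, restarting MySQL on the new server should make the copied database visible. A quick check, assuming the stock Ubuntu 14.04 package:
sudo service mysql restart
mysql -u root -p -e 'SHOW DATABASES;'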