Separating MySQL Data & Logs in Different Directories - mysql

I am not very experienced with databases so I hope my question makes sense.
I am setting up a MySQL/MariaDB instance for Nextcloud on my Ubuntu server and the data will be stored on a ZFS pool.
Several guides/blogs mention that it is best practice to keep data and log files in separate datasets with different properties. So I added the following to my my.cnf configuration file so that logs and data are stored in different directories rather than directly under /var/lib/mysql, which is the default.
[mysqld]
datadir = /var/lib/mysql/data
innodb_log_group_home_dir = /var/lib/mysql/log
innodb_data_home_dir = /var/lib/mysql/data
slow_query_log_file = /var/lib/mysql/log/slow.log
log_error = /var/lib/mysql/log/error.log
aria-log-dir-path = /var/lib/mysql/log
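For context, those directories are typically placed on their own ZFS datasets so each can get suitable properties. A minimal sketch of what that might look like, assuming a pool named tank (the dataset names and property values are illustrative, not taken from any particular guide):
# Hypothetical pool/dataset names; tune recordsize to the workload.
zfs create -o recordsize=16K tank/mysql-data    # 16K matches the default InnoDB page size
zfs create -o recordsize=128K tank/mysql-log    # logs are mostly sequential writes
zfs set mountpoint=/var/lib/mysql/data tank/mysql-data
zfs set mountpoint=/var/lib/mysql/log  tank/mysql-log
chown -R mysql:mysql /var/lib/mysql/data /var/lib/mysql/log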
In the /var/lib/mysql directory I see that the data/ and log/ directories have been created and contain some files.
drwxr-xr-x 7 mysql mysql 9 Apr 18 08:03 ./
drwxr-xr-x 8 root root 8 Apr 6 00:10 ../
drwxr-xr-x 2 mysql mysql 5 Apr 18 08:03 data/
drwxr-xr-x 2 mysql mysql 6 Apr 18 08:03 log/
-rw-rw---- 1 mysql mysql 0 Apr 18 08:03 multi-master.info
drwx------ 2 mysql mysql 90 Apr 18 08:03 mysql/
-rw-r--r-- 1 mysql mysql 15 Apr 18 08:03 mysql_upgrade_info
drwx------ 2 mysql mysql 3 Apr 18 08:03 nextcloud/
drwx------ 2 mysql mysql 3 Apr 18 08:03 performance_schema/
However, there are still some additional files and folders in the /var/lib/mysql directory (I guess one for each database; they only contain a db.opt file).
Are these files/folders considered neither data nor logs, and if I were to back up or recreate the database(s) on a different system, would I need to copy anything other than the data/ directory?
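As a fallback, a plain logical dump should sidestep the file-layout question entirely, since it does not care where the files live; a rough sketch, using the nextcloud database from the listing above (credentials omitted):
mysqldump --single-transaction --routines nextcloud > nextcloud.sql
# on the target system:
mysql -e "CREATE DATABASE nextcloud"
mysql nextcloud < nextcloud.sql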

Related

Unable to remove MariaDB persistent storage with Docker Rootless

I am running Docker rootless. I dedicated the user gcadmin (UID 1001) to controlling the Docker containers.
One of my containers is running a MariaDB image.
I have configured a Docker volume for persistent storage in my docker-compose.yml file:
volumes:
...
- ./docker_volumes/mysql/data:/var/lib/mysql:rw
...
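For reference, once the container is running, something like the following shows how that bind mount was actually resolved (the container name here is a placeholder for whatever docker ps reports):
docker inspect --format '{{ json .Mounts }}' mariadb_container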
On first container startup, my data folder is created and populated by the user mysql (UID 999):
// From within the container
root@f7e3b6722b0b:/var/lib/mysql# ls -la
drwxr-xr-x 6 mysql mysql 4096 Jul 25 15:44 .
drwxr-xr-x 1 root root 4096 Jun 7 02:44 ..
-rw-rw---- 1 mysql mysql 18309120 Jul 25 14:59 aria_log.00000001
-rw-rw---- 1 mysql mysql 52 Jul 25 14:55 aria_log_control
On the host side, the files appear as owned by UID 166534:
gcadmin@host:~$ ls -la docker_volumes/mysql/data/
drwxr-xr-x 6 166534 166534 4096 juil. 25 15:44 .
drwxrwxr-x 5 gcadmin gcadmin 4096 juil. 25 14:51 ..
-rw-rw---- 1 166534 166534 18309120 juil. 25 14:59 aria_log.00000001
-rw-rw---- 1 166534 166534 52 juil. 25 14:55 aria_log_control
...
I understand that 166534 comes from the user namespace remapping: 165536 + 999 - 1 = 166534
gcadmin@host:~$ cat /etc/subuid
gcadmin:165536:65536
...
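A small sanity check of that arithmetic, reading the range start straight from /etc/subuid (the first subordinate UID maps to namespace UID 1, hence the -1):
container_uid=999
subuid_start=$(awk -F: '$1=="gcadmin" {print $2}' /etc/subuid)   # 165536
echo $(( subuid_start + container_uid - 1 ))                     # prints 166534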
Now I want "gcadmin" to be able to remove the docker_volumes/mysql/data/ folder to reset MariaDB to its initial state.
gcadmin@host:~$ rm -rf docker_volumes/mysql/data/
rm: cannot remove 'docker_volumes/mysql/data/multi-master.info': Permission denied
...
Of course gcadmin doesn't own the files and I get a permission error.
In my use case, "gcadmin" can't have root privileges for security reasons.
How can I remove the persistent data folder of MariaDB without having root privileges on the host side?
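One hedged sketch of an approach that should work without host root: do the deletion from inside a throwaway container, since within the rootless daemon's user namespace that container's root maps onto the same subordinate UIDs that own the files (path taken from the compose file above):
docker run --rm -v "$(pwd)/docker_volumes/mysql:/cleanup" alpine rm -rf /cleanup/data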

How to start MariaDB on boot after external drive is mounted

I am using a Raspberry Pi 3 running OSMC (based on Debian Stretch) with nginx, and I manually installed MariaDB 10.2 following some instructions I found a while back.
I have changed the datadir for MariaDB to /media/USBHDD2/shared/mysql
When I boot or reboot the Pi, MariaDB fails to start. Before, when I had the default datadir = /var/lib/mysql, it was all fine. If I change it back, it is fine.
However, if I login to the console I can successfully start it by using
service mysql start
Note that I am using 'service' rather than 'systemctl' - the latter does not work. The files mariadb.service and mysql.service do not exist anywhere.
In /etc/init.d I find two files, mysql and myswql, which seem to be identical. If I remove myswql from the directory, MariaDB won't start at all. I have tried editing these by putting, for example, a sleep 15 at the beginning, but to no avail. I have read all sorts of solutions about testing whether USBHDD2 is mounted, e.g. using
while ! test -f /media/USBHDD2/shared/test.txt
do
sleep 1
done
which I tried in the /etc/init.d/mysql and myswql files, and also in rc.local before calling for the start of mysql.
But that doesn't work either.
I also renamed the links in rc?.d to S99mysql so it starts after everything else, but still no joy.
I have spent two full days on this to no avail. What do I need to do to get this working so that mysql starts on boot?
The file system is NTFS.
Output from ls -la /media/USBHDD2/shared/mysql is as follows:
total 176481
drwxrwxrwx 1 root root 4096 Mar 27 11:41 .
drwxrwxrwx 1 root root 4096 Mar 27 13:06 ..
-rwxrwxrwx 1 root root 16384 Mar 27 11:41 aria_log.00000001
-rwxrwxrwx 1 root root 52 Mar 27 11:41 aria_log_control
-rwxrwxrwx 1 root root 0 Nov 3 2016 debian-10.1.flag
-rwxrwxrwx 1 root root 12697 Mar 27 11:41 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Mar 27 11:41 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Mar 26 22:02 ib_logfile1
-rwxrwxrwx 1 root root 79691776 Mar 27 11:41 ibdata1
drwxrwxrwx 1 root root 32768 Mar 25 18:37 montegov_admin
-rwxrwxrwx 1 root root 0 Nov 3 2016 multi-master.info
drwxrwxrwx 1 root root 20480 Sep 3 2019 mysql
drwxrwxrwx 1 root root 0 Sep 3 2019 performance_schema
drwxrwxrwx 1 root root 86016 Mar 25 20:06 rentmaxpro_wp187
drwxrwxrwx 1 root root 0 Sep 3 2019 test
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_admin
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_demo
drwxrwxrwx 1 root root 40960 Mar 25 21:05 trustedhomerenta_meta
drwxrwxrwx 1 root root 36864 Mar 25 21:25 trustedhomerenta_montego
drwxrwxrwx 1 root root 36864 Mar 26 20:37 trustedhomerenta_testmontego
The problem is that the external drive is formatted as NTFS.
MySQL requires the files and directories to be owned by mysql:mysql, but since NTFS does not have the same system of owners and groups as Linux, the Linux mount process assigns its own owner and group to the file structure when mounting the drive. By default this ends up being root:root, so MySQL cannot use them.
NTFS does not allow chown to work, so there is no way to change the ownership away from root after mounting.
One solution is to backup all the files, repartition as EXT4, and then restore all the files.
The solution I finally used was to specify mysql as the owner and group at the time that the drive is being mounted. Thus my /etc/fstab file was changed to:
UUID=C2CA68D9CA68CB6D /media/USBHDD2 ntfs users,exec,uid=mysql,gid=mysql 0 2
and now mysql starts properly at boot.
phew ;-)
Thanks @danblack for getting me thinking in the right direction.
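For anyone testing the same change without a full reboot, remounting and re-checking ownership confirms it took (paths as above):
sudo umount /media/USBHDD2
sudo mount /media/USBHDD2              # re-reads the fstab entry
ls -la /media/USBHDD2/shared/mysql     # should now show mysql:mysql
sudo service mysql start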

Unable to start SQL Node in Mysql-cluster

mysqld: [ERROR] Could not open required defaults file
I'm trying to configure MySQL Cluster with the Auto-Installer. When I deploy and start the cluster, I get the following error when starting the SQL nodes:
(screenshot: Error 1)
So I checked the permissions of the files. ls -l looks like this:
drwxrwxr-x 3 mysql ubuntu 4096 Sep 10 03:00 1
drwxrwxr-x 3 mysql ubuntu 4096 Sep 10 03:00 2
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:30 49
drwxrwxr-x 5 mysql ubuntu 4096 Sep 10 03:03 53
And inside the 53 folder:
-rw-rw-r-- 1 mysql ubuntu 214 Sep 10 03:25 my.cnf
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:00 mysql
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:00 test
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:00 tmp
I have tried to run the command manually and I get the following message:
ubuntu@mysql-cluster-1:~/MySQL_Cluster/53$ !41
/usr/sbin/mysqld --defaults-file=/home/ubuntu/MySQL_Cluster/53/my.cnf
mysqld: [ERROR] Could not open required defaults file: /home/ubuntu/MySQL_Cluster/53/my.cnf
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!
The file my.cnf contains this configuration:
#
# Configuration file for test1
# Generated by mcc
#
[mysqld]
log-error=mysqld.53.err
datadir="/home/ubuntu/MySQL_Cluster/53/data"
tmpdir="/home/ubuntu/MySQL_Cluster/53/tmp"
basedir="/usr/"
port=3306
ndbcluster=on
ndb-nodeid=53
ndb-connectstring=10.142.0.2:1186,
socket="/home/ubuntu/MySQL_Cluster/53/mysql.socket"
ndb-wait-setup=120
ndb-batch-size=32768
ndb-blob-read-batch-bytes=65536
ndb-blob-write-batch-bytes=65536
ndb-deferred-constraints=0
ndb-log-apply-status=0
ndb-log-empty-epochs=0
ndb-log-empty-update=0
ndb-log-exclusive-reads=0
Edit 1: I'm using Ubuntu 18.04.1 and MySQL Cluster 7.6.7 installed with the .deb files
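In case it helps anyone hitting the same error: the usual suspects for "Could not open required defaults file" on Ubuntu are path permissions (every directory from / down to the file must be traversable by the user mysqld runs as) and AppArmor denying reads under /home. A quick, hedged set of checks:
namei -l /home/ubuntu/MySQL_Cluster/53/my.cnf                 # permissions on every path component
sudo -u mysql head -1 /home/ubuntu/MySQL_Cluster/53/my.cnf    # can the mysql user read it?
sudo dmesg | grep -i apparmor | grep -i mysqld                # any AppArmor denials for mysqld?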

Neo4j - Couldn't load the external resource at: file:

I am using Ubuntu 14.04 and trying to import a CSV file, but I am getting the following error: Couldn't load the external resource at: file:/usr/share/neo4j/import/orders.csv
My query is:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///orders.csv" AS row
MATCH (order:Order {orderId: row.SalesOrderID})
MATCH (product:Product {productId: row.ProductID})
MERGE (order)-[pu:PRODUCT]->(product)
ON CREATE SET pu.unitPrice = toFloat(row.UnitPrice), pu.quantity = toFloat(row.OrderQty);
I have placed the CSV files in /var/lib/neo4j/import and also changed the permissions with sudo chmod 777 -R /var/lib/neo4j/import, but it is still not working.
The file permissions are as follows:
sachin@sachin:/var/lib/neo4j$ ls -la
total 28
drwxr-xr-x 7 neo4j adm 4096 Aug 31 10:10 .
drwxr-xr-x 76 root root 4096 Aug 30 19:33 ..
drwxr-xr-x 2 neo4j nogroup 4096 Aug 31 10:10 certificates
drwxr-xr-x 4 neo4j adm 4096 Aug 31 10:10 data
drwxrwxrwx 2 neo4j adm 4096 Aug 31 11:16 import
drwxr-xr-x 2 neo4j nogroup 4096 Aug 31 10:10 .oracle_jre_usage
drwxr-xr-x 2 neo4j adm 4096 Jul 28 09:19 plugins
Please help!!! Thanks.
Okay, I've resolved it by creating a new import directory under /usr/share/neo4j, placing the CSV files in this directory, and setting its permissions to 777.
Explanation: my error was Couldn't load the external resource at: file:/usr/share/neo4j/import/orders.csv, but I had placed my CSV files at /var/lib/neo4j/import. Hope it helps others, thanks.
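The mismatch between the two paths is governed by the dbms.directories.import setting; checking it in neo4j.conf tells you where LOAD CSV will actually look (the path below is the usual Debian-package location, adjust if yours differs):
grep -n 'dbms.directories.import' /etc/neo4j/neo4j.conf
# A leading # means the default is in effect: an import/ directory under the Neo4j home.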

Docker: correct way of persisting container data to host

I'm using OSX and running docker over a Boot2docker VM.
I've been trying to figure out how to persist a container's data (MySQL official docker image) to the host but without much success.
I keep receiving an error stating that the /var/lib/mysql directory that the MySQL service is trying to write to is not accessible.
docker run -e MYSQL_ROOT_PASSWORD=12345 -v "$(pwd)/.docker-volumes/mysql:/var/lib/mysql" mysql:5.6
Looking at the permissions of the mounted directory in the container, this is what I see:
root@mysql:/# ls -la /var/lib/
total 44
drwxr-xr-x 16 root root 4096 Jan 27 18:35 .
drwxr-xr-x 18 root root 4096 Jan 27 18:35 ..
drwxr-xr-x 7 root root 4096 Jan 27 18:35 apt
drwxr-xr-x 14 root root 4096 Jan 27 18:35 dpkg
drwxr-xr-x 2 root root 4096 Jul 14 2013 initscripts
drwxr-xr-x 2 root root 4096 Jul 14 2013 insserv
drwxrwsr-x 2 libuuid libuuid 4096 Dec 11 2012 libuuid
drwxr-xr-x 2 root root 4096 Dec 24 13:41 misc
drwxr-xr-x 1 1000 staff 102 Feb 4 15:10 mysql
drwxr-xr-x 2 root root 4096 Jan 27 16:48 pam
drwxr-xr-x 2 root root 4096 Nov 23 2012 update-rc.d
drwxr-xr-x 2 root root 4096 Jul 14 2013 urandom
As you can see, the mysql directory is owned by 1000 and belongs to the group 'staff'.
My assumption is that the MySQL service process runs as another user (mysql), and that is why I get this error.
I've read that this specific issue can be solved using data volume containers, but since they persist the data only as long as a container is actually using their volume, it's not a good solution for me.
Am I approaching this in the wrong way?
Thanks.
You're definitely better off using a data-volume container; I do the same thing with local psql and couchdb databases. The data actually persists, it's just not accessible unless you link the volume to a container. To actually force the volume to be removed you have to specify docker rm -v; that will remove the data volume if no other container is linked to it.
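A minimal sketch of that data-volume container pattern, using the image tag from the question (container names are placeholders):
# Create a named container whose only job is to own the /var/lib/mysql volume:
docker create -v /var/lib/mysql --name mysql_data mysql:5.6 /bin/true
# Run the actual database against that volume:
docker run -d -e MYSQL_ROOT_PASSWORD=12345 --volumes-from mysql_data --name mysql mysql:5.6
# Removing the mysql container leaves the data intact; only this deletes the volume:
docker rm -v mysql_data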