I don't know exactly what I did wrong, but it's likely some chown operation that I ran. I was trying to give the mysql:mysql user and group access to a /media/usb drive, but I may have inadvertently changed something else.
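The command I intended to run was something along these lines (I don't have the exact shell history, so this is only a reconstruction):
sudo chown -R mysql:mysql /media/usb    # give the mysql user and group ownership of the USB mount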
When I run sudo systemctl start mysql.service I get an error. Examining it with sudo systemctl status mysqld gives the following:
mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: activating (start-post) (Result: exit-code) since Fri 2020-06-19 08:11:01 EDT; 19s ago
Process: 15459 ExecStart=/usr/sbin/mysqld (code=exited, status=203/EXEC)
Process: 15444 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 15459 (code=exited, status=203/EXEC); Control PID: 15460 (mysql-systemd-s)
Tasks: 2
Memory: 2.4M
CPU: 175ms
CGroup: /system.slice/mysql.service
└─control
├─15460 /bin/bash /usr/share/mysql/mysql-systemd-start post
└─15687 sleep 1
Jun 19 08:11:01 apil-dlrig systemd[1]: Starting MySQL Community Server...
Jun 19 08:11:01 apil-dlrig systemd[1]: mysql.service: Main process exited, code=exited, status=203/EXEC
When I check ownership on /var/lib/mysql, I get the following, which seems reasonable, i.e. the mysql user has full ownership of this folder.
apil@apil-dlrig:~$ sudo ls -la /var/lib/mysql
total 176212
drwx------ 7 mysql mysql 4096 Jun 19 07:34 .
drwxr-xr-x 79 root root 4096 Oct 30 2019 ..
-rw-r----- 1 mysql mysql 56 Oct 20 2019 auto.cnf
-rw------- 1 mysql mysql 1680 Nov 22 2019 ca-key.pem
-rw-r--r-- 1 mysql mysql 1112 Nov 22 2019 ca.pem
-rw-r--r-- 1 mysql mysql 1112 Nov 22 2019 client-cert.pem
-rw------- 1 mysql mysql 1676 Nov 22 2019 client-key.pem
-rw-r--r-- 1 mysql mysql 0 May 5 06:38 debian-5.7.flag
drwxr-x--- 2 mysql mysql 4096 Jun 6 13:44 foo
-rw-r----- 1 mysql mysql 665 Jun 19 07:34 ib_buffer_pool
-rw-r----- 1 mysql mysql 79691776 Jun 19 07:34 ibdata1
-rw-r----- 1 mysql mysql 50331648 Jun 19 07:34 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 Oct 20 2019 ib_logfile1
-rw-r----- 1 mysql mysql 155 Jun 16 07:23 keyring_backup
drwxr-x--- 2 mysql mysql 4096 May 5 06:38 mysql
-rw-r--r-- 1 mysql mysql 6 May 5 06:38 mysql_upgrade_info
drwxr-x--- 2 mysql mysql 4096 May 5 06:38 performance_schema
-rw------- 1 mysql mysql 1680 Nov 22 2019 private_key.pem
drwxr-x--- 2 mysql mysql 4096 Jun 16 07:25 prod
-rw-r--r-- 1 mysql mysql 452 Nov 22 2019 public_key.pem
-rw-r--r-- 1 mysql mysql 1112 Nov 22 2019 server-cert.pem
-rw------- 1 mysql mysql 1680 Nov 22 2019 server-key.pem
drwxr-x--- 2 mysql mysql 12288 Nov 22 2019 sys
The /etc/systemd/system/multi-user.target.wants/mysql.service file looks like the following. Nothing should have changed here, i.e. it is as default as MySQL ships.
# MySQL systemd service file
[Unit]
Description=MySQL Community Server
After=network.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
PermissionsStartOnly=true
ExecStartPre=/usr/share/mysql/mysql-systemd-start pre
ExecStart=/usr/sbin/mysqld
ExecStartPost=/usr/share/mysql/mysql-systemd-start post
TimeoutSec=600
Restart=on-failure
RuntimeDirectory=mysqld
RuntimeDirectoryMode=755
Wondering what could be going wrong. Any help would be appreciated.
Thanks
If you look at the original error, the issue is: ExecStart=/usr/sbin/mysqld (code=exited, status=203/EXEC). That looks like some kind of execute-permission problem with the mysqld binary, so I checked it too:
ls -la /usr/sbin/mysqld which returned
-rw-r--r-- 1 root root 24585896 Apr 30 10:52 /usr/sbin/mysqld
So the issue (I thought) was that nobody had execute permission on the file. Look at the first three letters, rw-: the last dash, where an x should be, means no execute privilege.
So I simply ran chmod 777 /usr/sbin/mysqld, after which the permissions read:
-rwxrwxrwx 1 root root 24585896 Apr 30 10:52 /usr/sbin/mysqld
Now, systemctl start mysql.service runs just fine.
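In hindsight, 777 is looser than necessary: the stock binary is owned by root:root with mode 755, so something like the following would have been enough.
sudo chown root:root /usr/sbin/mysqld   # stock ownership of the server binary
sudo chmod 755 /usr/sbin/mysqld         # rwxr-xr-x: world-executable, writable only by root
sudo systemctl restart mysql.service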
It's amazing how often the simple act of writing up a question on Stack Overflow helps me solve the problem myself, probably 80% of the time. Thanks again, folks.
Related
I have a backup script that uses mysqldump to dump a MediaWiki database, then archives it with gzip. It seems to be working okay, but I am curious why the size of the archives appears to grow and shrink at random. It's not a very active site, so large amounts of data aren't being added or deleted on a daily basis.
-rw-r--r-- 1 root root 91M Mar 27 11:46 wiki_data_20220325.sql.gz
-rw-r--r-- 1 root root 93M Mar 27 11:46 wiki_data_20220326.sql.gz
-rw-r--r-- 1 root root 92M Mar 27 11:56 wiki_data_20220327.sql.gz
-rw-r--r-- 1 root root 110M Mar 28 03:15 wiki_data_20220328.sql.gz
-rw-r--r-- 1 root root 99M Mar 29 03:15 wiki_data_20220329.sql.gz
-rw-r--r-- 1 root root 103M Mar 30 03:15 wiki_data_20220330.sql.gz
-rw-r--r-- 1 root root 107M Mar 31 03:15 wiki_data_20220331.sql.gz
-rw-r--r-- 1 root root 78M Mar 27 11:47 wiki_html_20220320.tar.gz
-rw-r--r-- 1 root root 173M Mar 27 11:47 wiki_xml_20220321.xml
-rw-r--r-- 1 root root 173M Mar 27 11:47 wiki_xml_20220322.xml
-rw-r--r-- 1 root root 173M Mar 27 11:47 wiki_xml_20220323.xml
-rw-r--r-- 1 root root 173M Mar 27 11:47 wiki_xml_20220324.xml
The size difference persists after extracting the archives.
-rw-rw-r-- 1 user user 280M Mar 31 10:27 wiki0328.sql
-rw-r--r-- 1 user user 110M Mar 31 10:26 wiki0328.sql.gz
-rw-rw-r-- 1 user user 267M Mar 31 10:27 wiki0329.sql
-rw-r--r-- 1 user user 99M Mar 31 10:26 wiki0329.sql.gz
It's not necessarily a problem, but I am curious. Is this common/normal behavior for databases dumped from complex software like MediaWiki?
Here's the relevant chunk of the backup script, in case it matters...
echo "## Set ReadOnly on"
echo "\$wgReadOnly = 'Dumping Database, Access will be restored shortly';" >> $localSet
echo "## Dumping XML..."
php $dumpXML --full --quiet > $saveLoc/"wiki_xml_"$(date +%Y%m%d)".xml"
echo "## Dumping database..."
mysqldump my_wiki | gzip -f > $saveLoc/"wiki_data_"$(date +%Y%m%d)".sql.gz"
echo "## Set ReadOnly off"
tail -n 1 "$localSet" | wc -c | xargs -I {} truncate "$localSet" -s -{}
Thanks in advance for any info!
Summary of the comments above: the objectcache table in a MediaWiki database varies in size, and this is normal, so it will cause the database backup to vary in size. To minimize the size of the backup, some people omit the objectcache table from backups.
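For example, assuming the database is named my_wiki as in the script above and the default (empty) table prefix, the dump line could become something like:
mysqldump my_wiki --ignore-table=my_wiki.objectcache | gzip -f > $saveLoc/"wiki_data_"$(date +%Y%m%d)".sql.gz"
(If your wiki uses a table prefix, include it in the ignored table name, e.g. wiki_objectcache.)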
I am using a Raspberry Pi 3 with OSMC (based on Debian Stretch) as the operating system, along with nginx, and I manually installed MariaDB 10.2 following some instructions I found somewhere a while back.
I have changed the datadir for MariaDB to /media/USBHDD2/shared/mysql.
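For reference, the change is just the datadir setting in the server configuration; on my install it looks roughly like this (the exact config file may differ since I installed MariaDB manually):
[mysqld]
datadir = /media/USBHDD2/shared/mysql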
When I boot or reboot the Pi, MariaDB fails to start. Before, when I had the default datadir = /var/lib/mysql, it was all fine, and if I change it back it is fine again.
However, if I login to the console I can successfully start it by using
service mysql start
Note that I am using 'service' rather than 'systemctl' - the latter does not work. The files mariadb.service and mysql.service do not exist anywhere.
In /etc/init.d I find two files, mysql and myswql, which seem to be identical. If I remove myswql from the directory, MariaDB won't start at all. I have tried editing these by putting, for example, a sleep 15 at the beginning, but to no avail. I have read all sorts of solutions about testing whether USBHDD2 is mounted, e.g. using
while ! test -f /media/USBHDD2/shared/test.txt
do
sleep 1
done
which I tried in the /etc/init.d/mysql and myswql files, and also in rc.local before calling for the start of mysql.
But that doesn't work either.
I also renamed the links in rc?.d to S99mysql so it starts after everything else, but still no joy.
I have spent two full days on this to no avail. What do I need to do to get this working so that mysql starts on boot?
The file system is NTFS.
Output from ls -la /media/USBHDD2/shared/mysql is as follows:
total 176481
drwxrwxrwx 1 root root 4096 Mar 27 11:41 .
drwxrwxrwx 1 root root 4096 Mar 27 13:06 ..
-rwxrwxrwx 1 root root 16384 Mar 27 11:41 aria_log.00000001
-rwxrwxrwx 1 root root 52 Mar 27 11:41 aria_log_control
-rwxrwxrwx 1 root root 0 Nov 3 2016 debian-10.1.flag
-rwxrwxrwx 1 root root 12697 Mar 27 11:41 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Mar 27 11:41 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Mar 26 22:02 ib_logfile1
-rwxrwxrwx 1 root root 79691776 Mar 27 11:41 ibdata1
drwxrwxrwx 1 root root 32768 Mar 25 18:37 montegov_admin
-rwxrwxrwx 1 root root 0 Nov 3 2016 multi-master.info
drwxrwxrwx 1 root root 20480 Sep 3 2019 mysql
drwxrwxrwx 1 root root 0 Sep 3 2019 performance_schema
drwxrwxrwx 1 root root 86016 Mar 25 20:06 rentmaxpro_wp187
drwxrwxrwx 1 root root 0 Sep 3 2019 test
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_admin
drwxrwxrwx 1 root root 32768 Nov 3 2016 trustedhomerenta_demo
drwxrwxrwx 1 root root 40960 Mar 25 21:05 trustedhomerenta_meta
drwxrwxrwx 1 root root 36864 Mar 25 21:25 trustedhomerenta_montego
drwxrwxrwx 1 root root 36864 Mar 26 20:37 trustedhomerenta_testmontego
The problem is that the external drive is formatted as NTFS.
MySQL/MariaDB requires the files and directories to be owned by mysql:mysql, but since NTFS does not have the same system of owners and groups as Linux, the Linux mount process assigns its own owner and group to the file structure when mounting the drive. By default this ends up being root:root, so mysql cannot use them.
NTFS does not allow chown to work, so there is no way to change the ownership away from root.
One solution is to back up all the files, reformat the partition as ext4, and then restore all the files.
The solution I finally used was to specify mysql as the owner and group at the time that the drive is being mounted. Thus my /etc/fstab file was changed to:
UUID=C2CA68D9CA68CB6D /media/USBHDD2 ntfs users,exec,uid=mysql,gid=mysql 0 2
and now mysql starts properly at boot.
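To apply the new fstab entry without a reboot and confirm the ownership MariaDB will see, something like this works (stop MariaDB first if it is running):
sudo umount /media/USBHDD2
sudo mount -a                        # remounts everything listed in /etc/fstab
ls -ld /media/USBHDD2/shared/mysql   # should now show mysql mysql as owner and group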
phew ;-)
Thanks @danblack for getting me thinking in the right direction
I'm working in a company where they are using KVM virtualisation.
[root@601 log]# virsh list --all --title
Id Name State Title
----------------------------------------------------------------------------------
2 reporting-pilosa07 running 10.3.6.172
3 reporting-pilosa09 running 10.3.6.173
4 reporting-pilosa11 running 10.3.6.174
5 reporting-pilosa13 running 10.3.6.175
6 reporting-pilosa05 running 10.3.6.171
The VMs are running, but from time to time they die for some reason, and I would like to look at the individual VM logs.
[root@601 qemu]# ls -ltr
total 32
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa07.log
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa09.log
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa11.log
-rw------- 1 root root 2341 Oct 15 2018 reporting-pilosa13.log
-rw------- 1 root root 4885 Nov 12 2018 reporting-pilosa05.log
-rw------- 1 root root 7181 Jul 25 04:14 offlineonboarder02.log
[root@601 qemu]# pwd
/var/log/libvirt/qemu
The logs have not been written for about a year. Where can I re-enable the logs so that I can find out why the VMs died?
Thanks.
mysqld: [ERROR] Could not open required defaults file
I'm trying to configure MySQL Cluster with the Auto-Installer. When I deploy and start the cluster, I get the following message when starting the SQL nodes:
(screenshot of the error: mysqld: [ERROR] Could not open required defaults file)
So I checked the permissions of the files. ls -l looks like this:
drwxrwxr-x 3 mysql ubuntu 4096 Sep 10 03:00 1
drwxrwxr-x 3 mysql ubuntu 4096 Sep 10 03:00 2
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:30 49
drwxrwxr-x 5 mysql ubuntu 4096 Sep 10 03:03 53
And inside the 53 folder:
-rw-rw-r-- 1 mysql ubuntu 214 Sep 10 03:25 my.cnf
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:00 mysql
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:00 test
drwxrwxr-x 2 mysql ubuntu 4096 Sep 10 03:00 tmp
I have tried to run the command manually and I get the following message:
ubuntu@mysql-cluster-1:~/MySQL_Cluster/53$ !41
/usr/sbin/mysqld --defaults-file=/home/ubuntu/MySQL_Cluster/53/my.cnf
mysqld: [ERROR] Could not open required defaults file: /home/ubuntu/MySQL_Cluster/53/my.cnf
mysqld: [ERROR] Fatal error in defaults handling. Program aborted!
The file my.cnf contains this configuration:
#
# Configuration file for test1
# Generated by mcc
#
[mysqld]
log-error=mysqld.53.err
datadir="/home/ubuntu/MySQL_Cluster/53/data"
tmpdir="/home/ubuntu/MySQL_Cluster/53/tmp"
basedir="/usr/"
port=3306
ndbcluster=on
ndb-nodeid=53
ndb-connectstring=10.142.0.2:1186,
socket="/home/ubuntu/MySQL_Cluster/53/mysql.socket"
ndb-wait-setup=120
ndb-batch-size=32768
ndb-blob-read-batch-bytes=65536
ndb-blob-write-batch-bytes=65536
ndb-deferred-constraints=0
ndb-log-apply-status=0
ndb-log-empty-epochs=0
ndb-log-empty-update=0
ndb-log-exclusive-reads=0
Edit 1: I'm using Ubuntu 18.04.1 and MySQL Cluster 7.6.7 installed with the .deb files
I was playing with Docker and mounted my local MySQL data directory into a Docker container, then connected MySQL Workbench so I could view the DB (experimenting). Here is the command I ran:
docker run -d --name alldb-mysql -v /var/lib/mysql:/var/lib/mysql -e MYSQL_USER=root -e MYSQL_PASSWORD=password -p 3306:3306 mysql:latest
After I stopped the container and removed it, I can't start/restart MySQL (the local install). When I run sudo /etc/init.d/mysql start it returns
[....] Starting mysql (via systemctl): mysql.serviceJob for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details.
failed!
So I checked systemctl status mysql.service:
mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: activating (start-post) (Result: exit-code) since Mon 2017-04-03 22:26:15 IST; 26s ago
Process: 5470 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE)
Process: 5465 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 5470 (code=exited, status=1/FAILURE); Control PID: 5471 (mysql-systemd-s)
Tasks: 2
Memory: 1.6M
CPU: 222ms
CGroup: /system.slice/mysql.service
└─control
├─5471 /bin/bash /usr/share/mysql/mysql-systemd-start post
└─6579 sleep 1
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.980564Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explici
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.980614Z 0 [Warning] Can't create test file /var/lib/mysql/n.lower-test
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.980638Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.17-0ubuntu0.16.04.1) starting as process 5470 .
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.981928Z 0 [Warning] Can't create test file /var/lib/mysql/n.lower-test
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.981936Z 0 [Warning] Can't create test file /var/lib/mysql/n.lower-test
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.982122Z 0 [ERROR] failed to set datadir to /var/lib/mysql/
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.982142Z 0 [ERROR] Aborting
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.982162Z 0 [Note] Binlog end
Apr 03 22:26:15 n mysqld[5470]: 2017-04-03T21:26:15.982213Z 0 [Note] /usr/sbin/mysqld: Shutdown complete
Apr 03 22:26:15 n systemd[1]:
I also tried to log in to MySQL with my details:
mysql -uroot -ppassword1 which returned
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
When I run ls -la /var/lib/ | grep mysql on the /var/lib directory it returns:
drwx------ 12 root docker 4096 Apr 3 21:17 mysql
drwx------ 2 mysql mysql 4096 Feb 23 22:35 mysql-files
drwx------ 2 mysql mysql 4096 Feb 23 22:35 mysql-keyring
drwxr-xr-x 2 root root 4096 Jan 18 21:45 mysql-upgrade
By the looks of things, I (well, Docker did) messed up the ownership of my MySQL directory.
If I run ls -la /var/lib/mysql it returns
ls: cannot open directory '/var/lib/mysql': Permission denied
and running the same command with sudo (sudo ls -la /var/lib/mysql) returns
total 188480
drwx------ 12 root docker 4096 Apr 3 21:17 .
drwxr-xr-x 80 root root 4096 Mar 29 19:28 ..
-rw-r----- 1 guest-okauxv docker 56 Feb 23 22:35 auto.cnf
drwxr-x--- 2 guest-okauxv docker 4096 Mar 24 23:51 concretepage
-rw-r--r-- 1 guest-okauxv docker 0 Feb 23 22:35 debian-5.7.flag
drwxr-x--- 2 guest-okauxv docker 4096 Mar 25 00:10 myfidser
drwxr-x--- 2 guest-okauxv docker 4096 Mar 4 00:54 myotherFliDB
drwxr-x--- 2 guest-okauxv docker 4096 Mar 1 12:33 testFFAPI
-rw-r----- 1 guest-okauxv docker 679 Apr 3 21:16 ib_buffer_pool
-rw-r----- 1 guest-okauxv docker 79691776 Apr 3 21:17 ibdata1
-rw-r----- 1 guest-okauxv docker 50331648 Apr 3 21:17 ib_logfile0
-rw-r----- 1 guest-okauxv docker 50331648 Feb 23 22:35 ib_logfile1
-rw-r----- 1 guest-okauxv docker 12582912 Apr 3 21:17 ibtmp1
drwxr-x--- 2 guest-okauxv docker 4096 Feb 23 22:35 mysql
drwxr-x--- 2 guest-okauxv docker 4096 Mar 25 16:58 NodeRestDB
drwxr-x--- 2 guest-okauxv docker 4096 Feb 23 22:35 performance_schema
drwxr-x--- 2 guest-okauxv docker 12288 Feb 23 22:35 sys
drwxr-x--- 2 guest-okauxv docker 4096 Mar 29 10:34 testDB
drwxr-x--- 2 guest-okauxv docker 4096 Mar 1 11:52 demoDB
By the looks of this, I (well, Docker did) managed to change the owner and group of all the directories in the MySQL directory.
Do I need to do a complete reinstall of MySQL Server?
What is the simplest, easiest way to fix this?
Your help will be much appreciated.
Updated with FIX
Just what Andy Shinn said in point one: I ran sudo chown -R mysql:mysql /var/lib/mysql to change the owner back, then started MySQL by running sudo /etc/init.d/mysql start, and it returned
[ ok ] Starting mysql (via systemctl): mysql.service.
Some more information will be needed, but two things come to mind:
1. It is likely a permission issue. Based on your output, why don't you first just try changing the owner and group back to mysql? This should be a simple sudo chown -R mysql:mysql /var/lib/mysql.
2. It is possible that the mysql:latest image is a different version of MySQL than you were running locally, and it could have upgraded the MySQL data to newer formats which may be incompatible with older versions. Check that the version of MySQL you are running locally is the same as what the mysql:latest image tag points to (at least the same minor version, e.g. 5.6 and 5.6).
What version is mysql:latest and what version were you running locally? Do you have any log output from the MySQL container that you started?
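If it helps, a quick way to compare the two is something like the following (assuming the official image, whose entrypoint passes version flags straight through to mysqld):
mysqld --version                               # version of the locally installed server
docker run --rm mysql:latest mysqld --version  # version currently shipped by the mysql:latest tag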