Where is the mysqld event log? - mysql

When I run mysqld directly, it prints a whole lot of information about what it's doing.
As I understand it, this is not the correct way to run a MySQL server and you should use service mysql start instead (on older servers at least).
Any searches for "mysqld log" come up with logs for queries; I want to know what the program is doing as it starts. (I'm trying to set up MariaDB 10.1.14 with Galera replication.)
I want to be able to run service mysql start and then watch what's happening in the background.
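A quick way to watch those startup messages as they happen, assuming a Debian/Ubuntu-style install where errors go to syslog by default (as noted in one of the related answers below) and the systemd unit is named mariadb (adjust the unit name to your install):
sudo service mysql start
sudo journalctl -u mariadb -f                     # follow the unit's journal on systemd installs
sudo tail -f /var/log/syslog | grep -i mysqld     # or follow syslog directly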

Related

Q: MYSQL Data tables suddenly missing

Earlier today [11-09-2021] one of the databases in our production environment suddenly dropped its tables, for reasons we don't know. It happened some time after 4am, judging from a snapshot of the drive we still had from that time, which is strange since no one was using or accessing the server at the time. Can someone tell me whether this normally happens?
This is definitely not normal behaviour; you should check the MySQL logs to see what was happening at that time.
In MySQL there are three logs that are usually the most important:
The Error Log. It contains information about errors that occur while the server is running (including server starts and stops).
The General Query Log. This is a general record of what mysqld is doing (connects, disconnects, queries).
The Slow Query Log. It records "slow" SQL statements (as indicated by its name).
The one that will be your starting point is the General Query Log.
By default no log files are enabled in MySQL; all errors will be shown in the syslog (/var/log/syslog).
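You can check from the mysql client which logs are currently enabled and where they point; a quick sketch using the standard server variables:
SHOW GLOBAL VARIABLES LIKE 'log_error';
SHOW GLOBAL VARIABLES LIKE 'general_log%';
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';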
To enable them, follow the steps below:
1. Open the MySQL config file (/etc/mysql/my.cnf) and add the following lines under the [mysqld] section to enable the general query log:
general_log_file = /var/log/mysql/mysql.log
general_log = 1
2. Save the file and restart MySQL:
service mysql restart
To follow the general query log in real time, run:
sudo tail -f $(mysql -Nse "SELECT @@general_log_file")
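The error log and slow query log can be enabled in the same way; a minimal my.cnf sketch for the [mysqld] section (the paths and the 2-second threshold are just example values):
log_error           = /var/log/mysql/error.log
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2    # log statements that take longer than 2 seconds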
Hope this will help you to find out what actually happened on your database server.

mysql-Docker with separate file-based log files and log rotation

So I have a MySQL Docker container up and running with 3 log files (general, error, and slow-query log) enabled, which are written to /var/log/mysql/ (a path inside the mysql container) which is actually a directory on the Docker host (named 'log') mounted into the container as a volume specified in the docker-compose.yml.
We chose this setup because we didn't want the general and slow-query logs combined on stdout, and we prefer a daily rotation of the 3 separate log files, since it seems more comfortable to us to find a certain query that was issued, let's say, 4 days ago.
Since the mysql image (afaik) doesn't come with logrotate and/or cron, we decided to have another service in the docker-compose.yml named logrotator, which starts cron in its entrypoint, which in turn regularly runs logrotate with a given logrotate.conf. The 'log' directory is also mounted into the logrotator container, so it can do its rotation job on the mysql log files.
Now it seems that MySQL needs a "mysqladmin flush-logs" after each rotation to start writing to a new file descriptor, but the logrotator container cannot issue this command inside the mysql container.
To make it short(er): I'm sure there are better ways to accomplish separate log files with log rotation. Just how? Any ideas are much appreciated. Thanks.
Update:
Since we're using MySQL 5.7 as of now, and hence probably cannot solve our issue with the solution proposed by @buaacss (which might absolutely work), we decided to stay with a "cron" container. Additionally we installed docker.io inside the cron container and mounted the Docker host's /var/run/docker.sock into the cron container. This allows us to use "docker exec" to issue commands (in this case 'mysqladmin flush-logs') from the cron container to be executed in the mysql container. Problem solved.
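For reference, the logrotate.conf stanza for such a setup might look roughly like this (a sketch only; the container name mysql_db and the credentials file passed to mysqladmin are assumptions, not part of the original setup):
/var/log/mysql/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        # tell the mysql container to close and reopen its log files
        docker exec mysql_db mysqladmin --defaults-extra-file=/etc/mysql/rotate.cnf flush-logs
    endscript
}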
You can indeed use SIGHUP instead of the FLUSH LOGS statement, per the documentation:
https://dev.mysql.com/doc/refman/5.6/en/log-file-maintenance.html
but it may have some undesired effects, e.g. it writes a huge status report to the error log.
So, as I mentioned in a comment, a lighter version of SIGHUP was developed, SIGUSR1, to accomplish the functions below:
FR1: When SIGUSR1 is sent to the server, it must flush the error log.
FR2: When SIGUSR1 is sent to the server, it must flush the general log.
FR3: When SIGUSR1 is sent to the server, it must flush the slow query log.
FR4: SIGUSR1 must not send the MySQL status report. (Currently, when SIGHUP is sent to the server, a large status report is printed to stdout.)
FR5: The server must not fail when SIGUSR1 is sent, even though the slow log is not enabled.
FR6: The server must not fail when SIGUSR1 is sent, even though slow log output is set to a table (log_output).
FR7: The server must not fail when SIGUSR1 is sent, even though the general log is set to OFF.
NFR1: SIGALRM must be indistinguishable from how SIGUSR1 behaved before.
Unfortunately, this signal is only available in MySQL 8.0 or above.
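On MySQL 8.0+ the flush step in a postrotate script could then be a plain signal instead of mysqladmin; a rough sketch, run wherever mysqld itself runs (the pid-file path is an assumption and varies between installs):
kill -USR1 "$(cat /var/run/mysqld/mysqld.pid)"    # flush error/general/slow logs, no status report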

Google Cloud SQL instance always in Maintenance status & Binary logs issue

I have several Google Cloud SQL MySQL 2nd Gen 5.7 instances with failover replication. Recently I noticed that one of the instances had its storage overloaded with binlogs, and the old binlogs were not being deleted for some reason.
I tried restarting this instance, but it hasn't started since 17 March.
Normal binlog behaviour on another server:
The problem server: binlogs are not being cleared, the server won't start, and it is always shown as under maintenance in the gcloud console.
I also created another server with the same configuration, and its binlogs are never cleared either. I already have 5326 binlogs there, whereas on a normal server I have 1273 binlogs and they are cleared each day.
What I tried with the problem server:
1 - Delete it from the Google Cloud Platform frontend. Response: The instance id is currently unavailable.
2 - Restart it with the gcloud command. Response: ERROR: (gcloud.sql.instances.restart) HTTPError 409: The instance or operation is not in an appropriate state to handle the request. I get the same response for any other command I send with gcloud.
I also tried to solve the binlog problem by configuring the expire_logs_days option, but it seems this option is not supported on Google Cloud SQL instances.
After 3 days of digging I found a solution. Binlogs should be cleared automatically once they are 7 days old, so on day 8 the old ones should be removed. They still haven't been deleted for me and storage keeps climbing, but I trust it will clear shortly (today, I guess).
As I said, the SQL instance is always in maintenance and can't be deleted with the gcloud console command or from the frontend. But here is the interesting part: I could still connect to the instance with the mysql client, e.g. mysql -u root -p -h 123.123.123.123. So I just connected to the instance and dropped a database that was unused (or you can first use mysqldump to save the current live database and then drop it). In the MySQL logs (I'm using Stackdriver for this) I had a lot of messages like this: 2018-03-25T09:28:06.033206Z 25 [ERROR] Disk is full writing '/mysql/binlog/mysql-bin.034311' (Errcode: -255699248 - No space left on device). Waiting for someone to free space.... Let me be that "someone".
When I dropped the database, the instance restarted and came back up. Voila. Now we have a live instance again and can delete it, restore a database onto it, or change its storage.
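In command form, the workaround boils down to something like this (unused_db is a placeholder for whichever database you decide to drop; keep the dump in case you need the data back):
mysqldump -u root -p -h 123.123.123.123 unused_db > unused_db.sql    # optional backup first
mysql -u root -p -h 123.123.123.123 -e "DROP DATABASE unused_db"     # free the space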

Uwsgi, MySQL restart on reboot in a wrong order

I am trying to set up a Django website on EC2; basically I want the MySQL server and uWSGI to start after a reboot.
In order to make MySQL start on reboot, I did:
sudo cp /opt/mysql/server-5.6/support-files/mysql.server /etc/init.d/
sudo update-rc.d mysql.server defaults
In order to make Uwsgi start on reboot, I created a file /etc/init/uwsgi.conf:
description "ubuntu uwsgi instance"
start on runlevel [2345]
stop on runlevel [06]
exec uwsgi --ini /home/ubuntu/uwsgi.ini
However, the problem is that I need MySQL to start first. Right now it looks like uWSGI starts first and tries to connect to MySQL, which fails, and MySQL never gets started.
Could anyone help me on how to solve this issue?
Thanks in advance
When your computer starts up, it doesn't run the init.d scripts directly. Instead, depending on what's called the "runlevel", it runs the scripts in /etc/rcN.d (where N is the runlevel). You can determine the current runlevel with the runlevel command; mine returns 2 in normal operation. That means that when the computer started up, it ran the scripts in /etc/rc2.d. The contents of rc2.d are just symlinks to scripts in /etc/init.d, named according to whether they should be started or stopped, and the order they should be run.
Use the runlevel command to find out what runlevel your computer is at (probably 2), then look in /etc/rc2.d for a link named something like uwsgi, which will be a symlink to /etc/init.d/uwsgi, and rename it to zzz999 - or whatever it takes to get it to sort after the other entries - so that it runs last.
There's more information about init.d scripts and runlevels at https://www.linux.com/news/enterprise/systems-management/8116-an-introduction-to-services-runlevels-and-rcd-scripts
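For example, the rename step might look like this (a sketch; the actual link name in /etc/rc2.d depends on how uwsgi was installed, and S50uwsgi is just a guess):
ls /etc/rc2.d | grep -i uwsgi                      # find the current start link, e.g. S50uwsgi
sudo mv /etc/rc2.d/S50uwsgi /etc/rc2.d/S99uwsgi    # a higher number sorts after the mysql entry, so uwsgi starts last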
Even if you start MySQL before uWSGI, you're not assured it will be available by the time uWSGI is handling requests.
At startup MySQL does some checks on the database, loads InnoDB indexes, and recovers from the transaction log, or it may even hang.
You shouldn't rely on that approach.
Instead, add application logic that ensures you handle unavailability of the database correctly, e.g. by retrying or by showing an error page asking the user to retry.

Force MySQL Shutdown

We have MySQL 5.1.45 running on a RedHat box. We needed to shut the box down for maintenance last week and had an issue.
We first closed down all connections.
Then we ran the mysqladmin shutdown command.
The database appeared to hang. We waited 30 minutes before trying Ctrl-C, which gave us an error, and we ended up having to power-cycle the box.
Most of the documentation I can find points to InnoDB problems, but we are using MyISAM.
Is there a better way to shut down MySQL than running the Shutdown command?
Is there a way to force MySQL to shut down when it appears to hang?
service mysql stop? I'm not so sure though...
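If both service mysql stop and mysqladmin shutdown hang, a rough escalation path from another shell looks like this (a sketch, not an official procedure; killing mysqld hard can leave MyISAM tables needing REPAIR TABLE or myisamchk afterwards):
mysqladmin -u root -p processlist     # see what the server is still busy with
sudo kill -TERM "$(pidof mysqld)"     # ask mysqld for a normal shutdown, same as mysqladmin shutdown
sudo kill -9 "$(pidof mysqld)"        # last resort only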