MySQL in Docker with separate file-based log files and log rotation

So I have a MySQL Docker container up and running with three log files (general, error, and slow-query log) enabled, all written to /var/log/mysql/ (a path inside the MySQL container). That path is actually a directory on the Docker host (named 'log'), mounted into the container as a volume specified in the docker-compose.yml.
We chose this approach because we didn't want the general and slow-query logs combined on stdout, and we prefer a daily rotation of the three separate log files: it is more convenient to find a query that was issued, say, four days ago.
Since the MySQL image (afaik) doesn't ship with logrotate and/or cron, we decided to add another service to the docker-compose.yml named logrotator, which starts cron in its entrypoint; cron in turn regularly runs logrotate with a given logrotate.conf. The 'log' directory is also mounted into the logrotator container, so it can do its rotation job on the MySQL log files.
Now it seems that MySQL needs a "mysqladmin flush-logs" after each rotation to close the old file handles and start writing to the new files, but the logrotator container cannot issue this command inside the mysql container.
To make it short(er): I'm sure there are better ways to accomplish separate log files with log rotation. Just how? Any ideas are much appreciated. Thanks.
Update:
Since we're using MySQL 5.7 for now, and hence probably cannot use the solution proposed by #buaacss (which might well work), we decided to stay with a "cron" container. Additionally, we installed docker.io inside the cron container and mounted the Docker host's /var/run/docker.sock into it. This allows us to use "docker exec" to issue commands (in this case 'mysqladmin flush-logs') from the cron container that get executed in the mysql container. Problem solved.
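For reference, a minimal sketch of what the logrotate.conf driven by the cron container could look like. The container name mysql, the root credentials, and the mount path /var/log/mysql are assumptions; adapt them to your compose file:

```
# logrotate.conf sketch inside the logrotator/cron container;
# assumes the host 'log' dir is mounted at /var/log/mysql and
# /var/run/docker.sock is mounted so `docker exec` can reach mysql
/var/log/mysql/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        # make mysqld reopen its log files after rotation
        docker exec mysql sh -c 'mysqladmin -uroot -p"$MYSQL_ROOT_PASSWORD" flush-logs'
    endscript
}
```

sharedscripts makes the postrotate hook run once for all three logs instead of once per file.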

You can indeed use SIGHUP instead of the FLUSH LOGS statement, per the docs:
https://dev.mysql.com/doc/refman/5.6/en/log-file-maintenance.html
But it may have undesired side effects, e.g. writing a huge status report to the error log.
So, as I mentioned in the comment, a lighter version of SIGHUP was developed, SIGUSR1, to accomplish the functions below:
FR1: When SIGUSR1 is sent to the server, it must flush the error log.
FR2: When SIGUSR1 is sent to the server, it must flush the general log.
FR3: When SIGUSR1 is sent to the server, it must flush the slow query log.
FR4: SIGUSR1 must not send the MySQL status report. (Currently, when SIGHUP is sent to the server, a large status report is printed to stdout.)
FR5: The server must not fail when SIGUSR1 is sent, even though the slow log is not enabled.
FR6: The server must not fail when SIGUSR1 is sent, even though slow log output is set to a table (log_output).
FR7: The server must not fail when SIGUSR1 is sent, even though the general log is set to OFF.
NFR1: SIGALRM must be indistinguishable from how SIGUSR1 behaved before.
Unfortunately, this signal is only available in MySQL 8.0 and above.
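On MySQL 8.0+, the rotation hook could then avoid mysqladmin (and its credentials) entirely. A hedged sketch of the adjusted postrotate block; the container name mysql is an assumption:

```
postrotate
    # SIGUSR1 flushes the error/general/slow logs without
    # the status report that SIGHUP writes to the error log
    docker kill --signal=SIGUSR1 mysql
endscript
```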

Related

"Too many open files" error during backward ingest of data in influxdb v2 on rootless containerd

I am trying to figure out how resource limits are handled in rootless container environments. I am running influxdb:latest (tried :alpine as well) as a rootless container with nerdctl and containerd on an up-to-date Ubuntu host. While all other containers on the host and the influx startup run just fine, I get "too many open files" errors when ingesting about 3 years of data into the database. Data is sampled on a 24h basis, all in all 926 days with 5 points each. Logs indicate that the errors arise during shard creation.
From what I have read during the last hours, when running rootless containers, the container 'root' user gets mapped to a UID from my personal user's namespace on the host system. From my understanding, the container's 'root' should therefore inherit my host user's limits on open files and processes. ulimit -u gives 14959 as the open-files limit. lsof -u myuser | wc -l indicates about 833 open files during normal operation. Accordingly, there should be plenty of room for the influxdb container to open more files. The errors start to happen during creation of shards numbering below 100 on a newly set up database.
After hours of testing and googling, I was able to mitigate the problem by setting --ulimit nofile=5000:5000 when running the influxdb container.
Can someone explain why this is necessary even though my user has a limit set way above what should be needed and what is indicated by --ulimit?
I already tried mounting the influx data folder as a volume, as a bind mount, and without mounting at all. The issue stays the same, independent of how storage is handled. My impression is that the container gets started with a fairly low limit on open files. But why? I could not find anything related in the documentation.
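One piece of the puzzle is that ulimits are per-process and inherited at spawn time, so the limit your interactive shell reports says nothing about the limit the rootless containerd daemon was started with — and it is the daemon's limit the container inherits unless you override it with --ulimit. A quick illustration in a plain shell (no container involved):

```shell
# a child process inherits whatever limit is in force when it is spawned,
# regardless of what other shells belonging to the same user report
( ulimit -Sn 1024; echo "child sees: $(ulimit -Sn)" )
echo "this shell still sees: $(ulimit -Sn)"
```

The same mechanism applies when containerd spawns a container: the container starts with the daemon's nofile limit, not the one from your login shell.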

Is my MySQL being monitored by monit using nginx?

I'm unsure if my MySQL is actually being monitored by monit. See screenshot.
As you can see, under Processes the mysqld process is not being monitored (it failed a few times first), but under Files there are mysql_bin and mysql_rc, both of which are OK.
Is it safe to remove the mysql monitoring symbolic link, or do I need it anyway?
thanks
Short answer is no. Some more info:
The fact that both entries in the Files section are OK does not relate to any kind of working or running application. The Files section of monit simply checks file state information, such as, but not limited to, last modification time, size, and file hashes.
So basically, the two OKs in the Files section just tell you that the MySQL files are present and have not been changed.
The entry inside the Processes section is what you want to focus on. It checks for the presence of a running mysqld process on your system. You need to check whether the configuration of that entry inside monitrc (or included files) is looking for the right parameters. It should watch a process via a pidfile and could additionally check whether a connection can be established.
See monit docs on MySQL and/or paste your monit config for any in-depth help regarding monit.
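For comparison, a typical process check in monitrc looks roughly like the sketch below. The pidfile path and init scripts are assumptions and depend on the distribution:

```
check process mysqld with pidfile /var/run/mysqld/mysqld.pid
    start program = "/etc/init.d/mysql start"
    stop program  = "/etc/init.d/mysql stop"
    # verify the server actually answers, not just that the PID exists
    if failed host 127.0.0.1 port 3306 protocol mysql then restart
    if 5 restarts within 5 cycles then timeout
```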
PS: #Shadow: The question (more likely: the resulting problem) has nothing to do with a DB, but is a question about monitoring.

MySQL, recover database after mysqlbinlog loading error?

I am copying a MySQL DB from an initial dump file and the set of binlogs created after the dump.
The initial load from the dump is fine. Then, while loading the binlogs using mysqlbinlog, one of the files fails, for example with a "server has gone away" error.
Is there any way to recover from a failed mysqlbinlog run, or is the database copy now irreparably corrupted? I know which log has failed, but I can't just rerun that log since the error could have occurred at any query within the log.
Is there a way to handle this moving forward?
I can look into minimizing the chances that there will be an error in the first place, but it doesn't seem like much of a recovery process (or master/slave process) if any MySQL issue during the loading completely ruins the database. I feel that I must be missing something.
I'd check the configuration value for max_allowed_packet. This is pretty small by default (4MB or 64MB depending on MySQL version). You might need to increase it.
Note that you need to increase that option both in the server and in the client that is applying binlogs. The effective limit on packet size is the lesser of the server and client's configuration value.
Even if the binlog applied successfully during replication, it might not succeed when replaying binlogs, because the replay goes through the mysql client, which you need to run with the --max-allowed-packet option set high enough.
See https://dev.mysql.com/doc/refman/8.0/en/gone-away.html for more explanation of the error you got.
If you don't know the binlog coordinates of the last binlog event that succeeded, you'll have to start over: remove the partially-restored instance and restore from the backup again, then apply the binlog.
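A sketch of the relevant settings — the 256M value here is an arbitrary example, not a recommendation:

```
# my.cnf: the effective packet limit is the lesser of the
# server's and the client's configured values
[mysqld]
max_allowed_packet = 256M
```

When replaying, pass the same ceiling to the client, e.g. mysqlbinlog binlog.000123 | mysql --max-allowed-packet=256M -u root -p (the binlog file name is a placeholder).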

Google Cloud SQL instance always in Maintenance status & Binary logs issue

I have several Google Cloud SQL MySQL 2nd Gen 5.7 instances with failover replication. Recently I noticed that one of the instances ran out of storage: it was filling up with binlogs, and old binlogs were not being deleted for some reason.
I tried restarting the instance, but it won't start; it has been down since 17 March.
Normal binlog behaviour on another server:
Problem server: binlogs are not being cleared, the server won't start, and it is always under maintenance in the gcloud console.
I also created another server with the same configuration, and its binlogs are never cleared either. I already have 5326 binlogs there, while on a normal server I have 1273 binlogs and they are cleared each day.
What I tried with the problem server:
1 - Delete it from the Google Cloud Platform frontend. Response: The instance id is currently unavailable.
2 - Restart it with the gcloud command. Response: ERROR: (gcloud.sql.instances.restart) HTTPError 409: The instance or operation is not in an appropriate state to handle the request. I get the same response for any other command I send with gcloud.
I also tried to solve the binlog problem by configuring the expire_logs_days option, but it seems this option is not supported by Google Cloud SQL instances.
After 3 days of digging I found a solution. Binlogs are supposed to be cleared automatically once they are 7 days old; on the 8th day the instance should clear them. They still haven't been deleted for me, and storage is still climbing, but I trust it will clear shortly (today, I guess).
As I said, the SQL instance is always in maintenance and can't be deleted from the gcloud command line or the frontend. But here is the interesting part: I can still connect to the instance with the mysql client, e.g. mysql -u root -p -h 123.123.123.123. So I just connected to the instance, picked a database that was unused (or we could use mysqldump first to save the current live database) and then deleted it. In the MySQL logs (I'm using Stackdriver for this) I got a lot of messages like this: 2018-03-25T09:28:06.033206Z 25 [ERROR] Disk is full writing '/mysql/binlog/mysql-bin.034311' (Errcode: -255699248 - No space left on device). Waiting for someone to free space.... Let me be this "someone".
When I deleted the database, the instance restarted and came up. Voilà, a live instance again. Now we can delete it, restore the database on it, or change its storage.
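In SQL terms, the workaround boiled down to something like the following; the database name is hypothetical, and you should dump anything you still need first:

```sql
-- connected via: mysql -u root -p -h <instance-ip>
SHOW BINARY LOGS;          -- confirm how much space the binlogs occupy
DROP DATABASE unused_db;   -- hypothetical unused database; freeing its
                           -- space lets the stuck instance recover
```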

Docker vs. native installation on a server: how to recover MySQL/MariaDB from downtime without data loss

Fellows,
I have the following scenario:
I have an application X (let's say WordPress) running in a Docker container, linked to another Docker container running MySQL. If for some reason I lose the database, is there a way to recover the latest changes without any data loss?
For some reason (not sure if this is true or not) I believe that any file changes inside a Docker container are automatically lost. Therefore, if the database is inside a Docker container, then anything in the DB is automatically lost.
If instead MySQL is running natively on the server and the database goes down, I can just restart the server and things are OK, because the changes have been written to the filesystem. And before making the system live again, I can also force a backup to have the latest changes.
In both cases, of course, I somehow manage to make a good backup on a daily basis. And in both cases I want minimal data loss.
And of course I understand that there may be data corruption in the second option. I want to have the latest written data, even if corrupted.
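The premise that file changes in a container are automatically lost only holds for the container's writable layer; data written to a volume survives container removal and recreation. A minimal docker-compose sketch (service name, image tag, and credentials are placeholders):

```yaml
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example    # placeholder, use a secret in practice
    volumes:
      - db_data:/var/lib/mysql        # named volume: survives container recreation

volumes:
  db_data:
```

With the datadir on a named volume (or bind mount), a containerized MySQL is no more lossy on restart than a native one; the remaining risk in both setups is whatever was in flight when the crash happened.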