Auto-Dump Files generated? - mysql

I have a problem with MySQL: it keeps auto-generating many dump-style reports into the log file, and this is causing a problem with my internal storage.
Please let me know if you have a solution.

It sounds like you might have enabled the InnoDB Monitor, which outputs statistics and reports to the error log file every 15 seconds.
https://dev.mysql.com/doc/refman/8.0/en/innodb-enabling-monitors.html
Disable the monitors with these statements:
SET GLOBAL innodb_status_output=OFF;
SET GLOBAL innodb_status_output_locks=OFF;
In older versions of MySQL, instead of configuration options, the monitor was enabled in an odd way: you created a table using the InnoDB engine named innodb_monitor, innodb_lock_monitor, or innodb_tablespace_monitor. It didn't matter which columns the table had or which schema you created it in; the mere existence of a table with one of those names started the monitor output to the logs. Use either DROP TABLE or RENAME TABLE on each of those tables to disable the monitor output.
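On those older versions, the cleanup would look something like this (a minimal sketch; the schema name test is just an example, drop the tables from whichever schema they were created in):
DROP TABLE IF EXISTS test.innodb_monitor;
DROP TABLE IF EXISTS test.innodb_lock_monitor;
DROP TABLE IF EXISTS test.innodb_tablespace_monitor;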
In addition, you should configure your MySQL Server to do log rotation, so the error log never fills the storage. There are many blogs describing how to do this, here's one example: https://scalegrid.io/blog/managing-mysql-server-logs-rotate-compress-retain-delete/
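The usual rotation pattern is to rename the current error log at the OS level and then tell the server to reopen it; a rough sketch (the file path is an assumption, check your log_error variable):
-- after renaming the file at the OS level, e.g.
--   mv /var/log/mysql/error.log /var/log/mysql/error.log.1
-- make mysqld reopen the error log:
FLUSH ERROR LOGS;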

Related

Will removing MySQL log files impact the performance of the database?

One of our users is using Cloud SQL(MySQL).
They turned on the general logs flag, and log_output is FILE.
They need these general logs because of some special circumstances.
MySQL has generated about 8 TB of general logs, and these logs have inflated disk usage.
Here is the tricky part:
They want to remove these general log files [1] to decrease the size of the disk.
However, this is their production database. They are afraid this operation will impact the database's performance.
Since these log files are located at /var/log/mysql.log, the remove-logs operation will execute at the OS level, right? This is the part we are not so sure about.
If our user executes this truncateLog API, will the operation affect their database's performance?
Is there any best practice for this kind of situation?
P.S.: Our user doesn't want to turn off the general logs flag. They will truncate these logs once in a while, but for now they need to truncate the huge amount of logs that accumulated over the past few months.
[1] https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/instances/truncateLog
I understand that you have turned on the general logs flag with log_output set to FILE, and you want to remove these general log files to decrease the size of the disk.
According to the official documentation:
To make your general or slow query logs available, enable the corresponding flag and set the log_output flag to FILE. This makes the log output available using the Logs Viewer in the Google Cloud Platform Console. Note that Stackdriver logging charges apply.
If log_output is set to NONE, you will not be able to access the logs. If you set log_output to TABLE, the log output is placed in a table in your database. If this table becomes large, it can affect instance restart time or cause the instance to lose its SLA coverage; for this reason, the TABLE option is not recommended. If needed, you can truncate your log tables by using the API. For more information, see the instances.truncateLog reference page.
Instances: truncateLog truncates MySQL general and slow query log tables.
If I understood correctly, you cannot "truncate the huge amount of logs that accumulated in the past few months" via that API, because you did not set log_output to TABLE, and therefore there are no log tables to be truncated.
Regarding database performance: TRUNCATE TABLE Statement
On a system with a large InnoDB buffer pool and innodb_adaptive_hash_index enabled, TRUNCATE TABLE operations may cause a temporary drop in system performance due to an LRU scan that occurs when removing an InnoDB table's adaptive hash index entries. The problem was addressed for DROP TABLE in MySQL 5.5.23 (Bug #13704145, Bug #64284) but remains a known issue for TRUNCATE TABLE (Bug #68184).
Here you can check MySQL Server Log Maintenance.
Removing MySQL general_log FILES should not impact the performance of the database.
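For reference, you can verify the logging setup from SQL, and if log_output had been TABLE, the log tables could be truncated directly; a sketch (on Cloud SQL, the managed instances.truncateLog API is the supported route for that):
SHOW VARIABLES LIKE 'general_log';  -- is the general log enabled?
SHOW VARIABLES LIKE 'log_output';   -- FILE, TABLE, or NONE
-- only applicable when log_output includes TABLE:
TRUNCATE TABLE mysql.general_log;
TRUNCATE TABLE mysql.slow_log;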

Track all the DML/DDL changes of a DB in a log table using a trigger in MySQL

I would like to track all the DB changes happening on a particular DB using one log table.
I have checked many solutions, but they all give one audit table for each table in the DB. How can we track them all in one single table with the help of a trigger?
The log table columns might look like:
id -- primary key
db_name -- DB name
version -- ignore it (I have this column in my table)
event_type -- DDL/DML command name
object_name -- table/procedure/trigger/function name that was changed
object_type -- type, e.g. table, procedure, trigger
sql_command -- query executed by the user
username -- who executed it
updated_on -- timestamp
Thanks in advance.
A trigger that is called when DDL commands are executed (so you can log them) does not exist in MySQL. But you may want to use log files, especially the General Query Log:
The general query log is a general record of what mysqld is doing. The server writes information to this log when clients connect or disconnect, and it logs each SQL statement received from clients. The general query log can be very useful when you suspect an error in a client and want to know exactly what the client sent to mysqld.
The log is disabled by default, and enabling it may reduce performance a bit. It will also not include indirect changes (e.g. DDL statements executed inside a procedure).
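Enabling it at runtime is straightforward; a minimal sketch (with log_output set to TABLE, the entries land in mysql.general_log instead of a file):
SET GLOBAL general_log = 'ON';
SET GLOBAL log_output = 'TABLE';  -- or 'FILE'
-- the most recent statements, newest first:
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 20;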
If you can install a plugin, a slightly more configurable (and more performant) alternative would be to use an audit plugin, see MySQL Enterprise Audit, or any free implementation, e.g. this one, or you can write your own, but it will basically log the same things as the general log.
Another great source of information might be the information schema and the performance schema. From there you can collect basically all the information you need (especially the log of recently executed queries) and generate your log table from that, but it would require some work to gather all the data you want - and it is not triggered by actions, so you have to periodically check for changes yourself (e.g. compare the data in INFORMATION_SCHEMA.TABLES with a saved copy to keep track of added, deleted and renamed tables).
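A rough sketch of that polling approach (the snapshot table name audit_snapshot is hypothetical):
-- take a snapshot once:
CREATE TABLE audit_snapshot AS
SELECT table_schema, table_name, table_type
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
-- later, list tables that appeared since the snapshot:
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN audit_snapshot s
  ON s.table_schema = t.table_schema AND s.table_name = t.table_name
WHERE s.table_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');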
On the other hand, a periodic mysqldump followed by a diff against the most recent version might be a lot easier.

Database dumping in mysql after certain checkpoints

I want to get a mysqldump after a certain checkpoint, e.g. if I take a mysqldump now, then the next time I take a dump it should give me only the commands that executed in the intervening interval. Is there any way to get this using mysqldump?
One more thing: how do I see the DELETE and UPDATE commands in the mysqldump files?
Thanks
I don't think this is possible with mysqldump; however, that feature exists as part of MySQL core - it's called binlogging, or binary logging.
The binary log contains “events” that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes (for example, a DELETE which matched no rows). The binary log also contains information about how long each statement took that updated data
Check this out http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
Word of warning, binlogs can slow down the performance of your server.
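If binary logging is enabled, you can confirm it and peek at the recorded events from SQL; a sketch (the binlog file name is hypothetical, take a real one from SHOW BINARY LOGS):
SHOW VARIABLES LIKE 'log_bin';  -- ON if binary logging is enabled
SHOW BINARY LOGS;               -- lists the binlog files
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 10;
To turn a time range of the binlog back into the DELETE/UPDATE statements it recorded, the bundled mysqlbinlog utility can dump a file as SQL text, including between --start-datetime and --stop-datetime boundaries, which covers the "dump only what happened since the last checkpoint" use case.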

How to fetch the data from binary log file and insert in our desired table in MySQL?

I am writing PHP code for an audit trail. I ran into a situation where, if a new table is created, I won't have triggers for that new table, and hence no tracking could be done for it. Even if I code it to create three new triggers for this new table, how will I get the changes made to the table before those triggers existed? I found that the binary log file could be helpful here, to fetch the last change for this new table and insert it into the tracking table... but how?
If you're talking about the MySQL binary log file (mysql-bin), it wasn't designed to be read by anything other than MySQL - it's a transaction log file. The data in the log file will most of the time already be in your database by the time you read it.
Perhaps if you edit your question to provide more information about what it is you're trying to achieve, you may get a better answer and solution.
EDIT:
Parsing the binary log file is going to give you more headaches - it's an internal file for MySQL and is known to change between releases. It also changes format depending on how the server is configured (row-based/statement-based/mixed format.) Server administrators can also disable binary logging completely.
If you can take the performance hit, you may be better off logging all queries - you can have these written to a file, or even to a database table (although in early versions of MySQL 5.1 there were severe performance hits for this; it may still be the case.) This logs all SQL queries received from clients, so you can check for the CREATE TABLE query and all statements amending data in this table.
http://dev.mysql.com/doc/refman/5.1/en/query-log.html
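If you send the general log to a table (log_output=TABLE), picking out the events your audit trail cares about becomes a plain query; a sketch (the LIKE pattern is deliberately simplistic):
SELECT event_time, user_host, argument
FROM mysql.general_log
WHERE command_type = 'Query'
  AND argument LIKE 'CREATE TABLE%'
ORDER BY event_time;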

MySQL database size

Microsoft SQL Server has a nice feature, which allows a database to be automatically expanded when it becomes full. In MySQL, I understand that a database is, in fact, a directory with a bunch of files corresponding to various objects. Does it mean that a concept of database size is not applicable and a MySQL database can be as big as available disk space allows without any additional concern? If yes, is this behavior the same across different storage engines?
It depends on the engine you're using. A list of the ones that come with MySQL can be found here.
MyISAM tables have a file per table. This file can grow to your file system's limit. As a table gets larger, you'll have to tune it, as there are index and data size optimizations that limit the default size. Also, this MyISAM documentation page says:
There is a limit of 2^32 (~4.295E+09) rows in a MyISAM table. If you build MySQL with the --with-big-tables option, the row limitation is increased to (2^32)^2 (1.844E+19) rows. See Section 2.16.2, “Typical configure Options”. Binary distributions for Unix and Linux are built with this option.
InnoDB can operate in three different modes: using InnoDB table files (a shared tablespace), using a whole disk as a table file, or using innodb_file_per_table.
Table files are pre-created per your MySQL instance. You typically create a large amount of space and monitor it. When it starts filling up, you need to configure another file and restart your server. You can also set it to autoextend, so that it will add a chunk of space to the last table file when it starts to fill up. I typically don't use this feature, as you never know when you'll take the performance hit for extending the table. This page talks about configuring it.
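You can check how the shared tablespace is configured from SQL; the autoextend setting shows up in innodb_data_file_path (the value in the comment is just an illustrative example):
SHOW VARIABLES LIKE 'innodb_data_file_path';
-- a typical autoextending value looks like: ibdata1:10M:autoextend
SHOW VARIABLES LIKE 'innodb_autoextend_increment';  -- growth step, in MB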
I've never used a whole disk as a table file, but it can be done. Instead of pointing to a file, I believe you point your InnoDB table files at the unformatted, unmounted device.
innodb_file_per_table makes InnoDB tables act like MyISAM tables. Each table gets its own table file. Last time I used this, the table files did not shrink if you deleted rows from them. When a table is dropped or altered, the file resizes.
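If you need the file to shrink after large deletes, rebuilding the table is the usual way to reclaim the space; a sketch (t is a hypothetical table name):
OPTIMIZE TABLE t;
-- for InnoDB this is implemented as a rebuild, equivalent to:
ALTER TABLE t ENGINE=InnoDB;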
The Archive engine is a gzipped MyISAM table.
A memory table doesn't use disk at all. In fact, when a server restarts, all the data is lost.
Merge tables are like a poor man's partitioning for MyISAM tables. They cause a bunch of identical tables to be queried as if they were one. Aside from the FRM table definition, no files exist other than the MyISAM ones.
CSV tables are wrappers around CSV files. The usual file system limits apply here. They are not too fast, since they can't have indexes.
I don't think anyone uses BDB any more. At least, I've never used it. It uses a Berkeley DB database as a back end. I'm not familiar with its restrictions.
Federated tables are used to connect to and query tables on other database servers. Again, there is only an FRM file.
The Blackhole engine doesn't store anything locally. It's used primarily for creating replication logs and not for actual data storage, since there is no data storage :)
MySQL Cluster is completely different: it stores just about everything in memory (recent editions allow disk storage) and is very different from all the other engines.
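In practice, "database size" in MySQL reduces to summing the sizes of the table files, which you can do from information_schema regardless of engine; a sketch:
SELECT table_schema,
       engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema, engine;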
What you describe is roughly true for MyISAM tables. For InnoDB tables the picture is different, and more similar to what other DBMSs do: one (or a few) big file with a complex internal structure for the whole server. To optimize it, you can use a whole disk (or partition) as a file (at least on Unix-like systems, where everything is a file).