On a server I am working on, the MySQL general log table is taking up nearly 200 GB of space, which is huge. So I am planning to clear it up with:
TRUNCATE TABLE mysql.general_log;
Is that OK? Will it cause any issues? I am concerned because the server is live and runs a big application. Thanks.
It will definitely cause a problem unless the log is disabled before you truncate. If you truncate while it is enabled, TRUNCATE will lock the table (the mysql.general_log engine is either CSV or MyISAM), and since the table is huge, new entries being written to the general log will block on that lock in the meantime. So, to be safe, do it like this:
mysql> SET GLOBAL general_log=OFF;
mysql> TRUNCATE table mysql.general_log;
mysql> SET GLOBAL general_log=ON;
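If you want to double-check the current settings and the table's engine before doing this, a quick, non-destructive look would be:

mysql> SHOW VARIABLES LIKE 'general_log%';
mysql> SHOW VARIABLES LIKE 'log_output';
mysql> SHOW TABLE STATUS FROM mysql LIKE 'general_log';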
It won't cause problems, but you will lose all the old log entries, so Ravindra's advice above is good.
You can do a backup with:
mysqldump -p --lock-tables=false mysql general_log > genlog.sql
Do you need to have the general log on all the time? I usually only turn it on when I'm troubleshooting performance issues. MySQL logs EVERYTHING there (client connects, disconnects, and EVERY statement). In most systems, the logs get very big very quickly. There is also some performance overhead for this.
One of our users is using Cloud SQL (MySQL).
They turned on the general logs flag, and log_output is FILE.
They need these general logs because of some special circumstances.
MySQL has generated about 8 TB of general logs, and these logs result in much higher disk usage.
Here is the tricky part:
They want to remove these general log files [1] to decrease the size of the disk.
However, this is their production database. They are afraid this operation will impact their database's performance.
Since these log files are located at /var/log/mysql.log, the remove-logs operation will execute at the OS level, right? This is the part we are not so sure about.
If our user executes this truncateLog API, will this operation affect their database's performance?
Is there any best practice for this kind of situation?
P.S.: Our user doesn't want to turn off the general logs flag. They will truncate these logs once in a while. But for now, they need to truncate the huge amount of logs that they accumulated in the past few months.
[1] https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/instances/truncateLog
I understand that you have turned on the general logs flag with log_output set to FILE, and you want to remove these general log files to decrease the size of the disk.
According to the official documentation link:
To make your general or slow query logs available, enable the
corresponding flag and set the log_output flag to FILE. This makes the
log output available using the Logs Viewer in the Google Cloud
Platform Console. Note that Stackdriver logging charges apply.
If log_output is set to NONE, you will not be able to access the logs.
If you set log_output to TABLE, the log output is placed in a table in
your database. If this table becomes large, it can affect instance
restart time or cause the instance to lose its SLA coverage; for this
reason, the TABLE option is not recommended. If needed, you can
truncate your log tables by using the API. For more information, see
the instances.truncateLog reference page.
Instances: truncateLog truncates the MySQL general and slow query log tables.
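For reference, a call to that API looks roughly like this (PROJECT and INSTANCE are placeholders, and the request body should be checked against the reference page above):

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"truncateLogContext": {"logType": "MYSQL_GENERAL_TABLE"}}' \
  "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT/instances/INSTANCE/truncateLog"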
If I understood correctly, you cannot "truncate the huge amount of logs that they accumulated in the past few months" because you did not set log_output to TABLE, and therefore there are no tables to truncate.
Regarding database performance: TRUNCATE TABLE Statement
On a system with a large InnoDB buffer pool and
innodb_adaptive_hash_index enabled, TRUNCATE TABLE operations may
cause a temporary drop in system performance due to an LRU scan that
occurs when removing an InnoDB table's adaptive hash index entries.
The problem was addressed for DROP TABLE in MySQL 5.5.23 (Bug #13704145, Bug #64284) but remains a known issue for TRUNCATE TABLE (Bug #68184).
Here you can check MySQL Server Log Maintenance.
Removing MySQL general_log FILES should not impact the performance of the database.
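For FILE-based logs on a server you manage yourself (Cloud SQL does not give you shell access), the pattern from that maintenance page is to rename the log file and then flush logs so the server reopens a fresh one; roughly, with example paths:

mv /var/log/mysql.log /var/log/mysql.log.old
mysqladmin -u root -p flush-logs
rm /var/log/mysql.log.old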
I ran the ANALYZE TABLE command on a production MySQL DB without knowing it would prevent me from selecting the contents of the table. This caused the production site to go down :( How long can it take for the lock to release? Also, would recreating the DB from a backup solve the problem / get rid of the locks?
Please let me know.
Thanks.
ANALYZE TABLE waits to acquire a metadata lock. While it's waiting, any SQL query against the table waits for ANALYZE TABLE.
ANALYZE TABLE is normally pretty quick, i.e. 1-3 seconds. But that quick operation doesn't start until it can acquire the metadata lock.
It can't acquire the metadata lock while you have long-running transactions going against the table. So if you want this to run faster, finish your transactions.
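To see which transactions are the long-running ones (and their connection ids), you can query information_schema, for example:

mysql> SELECT trx_mysql_thread_id, trx_started, trx_query
       FROM information_schema.INNODB_TRX
       ORDER BY trx_started;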
See my answer to MySQL failing to ALTER TABLE which is being actively written to for more information.
ANALYZE TABLE quite clearly says 'During the analysis, the table is locked with a read lock for InnoDB and MyISAM'.
You can KILL {connection number} in SQL to stop the command.
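For example (the connection number here is made up):

mysql> SHOW PROCESSLIST;
mysql> KILL 12345;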
Note: you should probably upgrade to a more recent version, such as MySQL 5.6.
Some relevant my.cnf settings:
binlog-format=ROW
init_connect='SET autocommit=1'
autocommit=1
innodb_flush_log_at_trx_commit=1
I also have replication running... Now, most of the time things run rather well.
But sometimes I do get this:
Could not execute Delete_rows/Update_rows event on table auto.parcels_to_cache; Can't find record in 'parcels_to_cache'.
This is because of this:
mysql-bin.000021.decoded-26373095-### DELETE FROM auto.parcels_to_cache
mysql-bin.000021.decoded-26373096-### WHERE
mysql-bin.000021.decoded-26373097-### #1='0101'
mysql-bin.000021.decoded-26373098-### #2='2013:01:05'
mysql-bin.000021.decoded:26373099:### #3='01014700669249'
--
mysql-bin.000022.decoded-4143326-### INSERT INTO auto.parcels_to_cache
mysql-bin.000022.decoded-4143327-### SET
mysql-bin.000022.decoded-4143328-### #1='0101'
mysql-bin.000022.decoded-4143329-### #2='2013:01:05'
mysql-bin.000022.decoded:4143330:### #3='01014700669249'
This is a decoded binary log from the master server. The replication server reflects this.
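For reference, the ### pseudo-SQL above is what mysqlbinlog prints for row-based events when run in verbose mode, along the lines of:

mysqlbinlog --verbose mysql-bin.000021 > mysql-bin.000021.decoded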
Also, this seems to happen only on InnoDB tables, but not always. Although I think the MyISAM replication problems I had were related to another issue.
I recently reworked all the source code to remove the few transactions I had in there, all of them. So there are no BEGINs, COMMITs, or ROLLBACKs anymore... Then I also changed the MySQL database class to always disable explicit commits.
This is because I read on the MySQL website that there are issues with mixing transactional and non-transactional tables.
For example, the auto.parcels_to_lifecycle table is heavily used; sometimes it is accessed by possibly 20 threads at once. Hence the InnoDB. Otherwise each thread would wait while only one thread is updating...
Does anyone know how to fix this DELETE-before-INSERT problem? Or maybe a way to approach the problem and fix it?
Thanks!
I'm having a problem on a master MySQL (5.0, Linux) server: I tried to add a comment to a table row, which translates into an ALTER TABLE command. Now the process is stuck on 'copy to tmp table', copying the 100'000'000+ rows. Disk IO usage is uncomfortably high.
Since the master is using replication, I'm unsure if I can kill this process. The slaves haven't seen the ALTER TABLE command yet.
(To make this clear: I'm talking about killing the process from the MySQL-PROCESSLIST, not the MySQL-Daemon-process itself.)
Yes, you can kill it - the ALTER won't make it into the binlogs until the transaction is committed, i.e. until the ALTER is finished. So the slaves won't see it or execute it, and the master will roll back to the old table structure.
You can easily verify that the ALTER is not yet in the binlogs by using SHOW BINLOG EVENTS or the mysqlbinlog utility.
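For example, to look at the events in the current binlog (the file name is just an illustration):

mysql> SHOW BINLOG EVENTS IN 'mysql-bin.000042' LIMIT 20;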
No, it's not safe. Only do it if you have a recent full backup of the database to restore in case of a problem.
There might be locks left behind, and you could end up with locked tables or possibly damaged keys.
As advice, if you add new columns to such a large table, it's easier to:
- create a copy of the table schema,
- run the ALTER on that table while it's empty,
- populate it from the original with an INSERT INTO ... SELECT fields FROM ... .
This is much faster. Then obviously rename the new table to the original name, as sketched below.
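A minimal sketch of that pattern, with hypothetical table and column names (note that rows written to the original table during the copy step are not carried over, so this fits best in a low-write window):

mysql> CREATE TABLE big_table_new LIKE big_table;
mysql> ALTER TABLE big_table_new ADD COLUMN note VARCHAR(255);
mysql> INSERT INTO big_table_new (id, col_a, col_b)
       SELECT id, col_a, col_b FROM big_table;
mysql> RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;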
You can kill the operation, but two things can happen:
- the slave schema may become inconsistent with the master, and therefore
- replication on the slaves may stop.
When replication stops, you can try to manually fix the slave(s) by skipping over the ALTER TABLE statement; on the slave server, enter:
STOP SLAVE; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE;
Sometimes I get an error like "table is marked as corrupt and should be repaired". That DB (its tables) uses MyISAM. Recently this keeps happening. What could be the causes? Most recently I was executing a batch insert:
INSERT INTO table (..., ..., ...) VALUES (...), (...), (...) ...
It just hung, or took so long to complete that it seemed hung to me. The next day when I checked, the table was marked as corrupt again. When I tried mysqlcheck -r, it said all tables were OK, but when it reached that "corrupt" table it hung there again...
So, what can I do to prevent this, and what could be the causes? The DB is hosted by a third party; how can I debug this?
Is InnoDB a more reliable engine to use? I heard MyISAM is faster, but others say InnoDB can be fast too, although it takes a bit more effort to optimize. Can I conclude that InnoDB is more reliable but a bit slower overall, even with optimization?
If your tables get corrupted, you can use the REPAIR TABLE command to fix them (tbl_name being the affected table):
REPAIR TABLE tbl_name;
If you run myisamchk while the server is still running (and inserts/selects are hitting the table), it could be what is corrupting your tables. Most of the corruption issues I run into are when trying to do things outside the server (copying the files, etc) while it is still running.
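A safer alternative while the server is running is to check the table from inside the server, which coordinates with other sessions (tbl_name being your table):

mysql> CHECK TABLE tbl_name;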
InnoDB is slower for read only databases, because it has features (ACID compliant, row level locking) that MyISAM leaves out. However, if you are doing a mixture of reads and writes, depending on the mixture, then InnoDB can offer serious performance improvements, because it doesn't have to lock the entire table to do a write. You also don't run into corruption issues.
Go with InnoDB.
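If you do switch, converting an existing table is a single statement; note that it rebuilds the whole table, so it can take a long time and block writes on a big MyISAM table:

mysql> ALTER TABLE tbl_name ENGINE=InnoDB;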
OK, so the problem was that the company's DB exceeded the storage space allowed by the hosting company. Apparently no one told the company they had exceeded the limit... lousy host, I guess.
By the way, is there no way MySQL could have known about this?