When was the last time a mysql table was accessed? - mysql

Is there a way to tell the last access time of a MySQL table? By access I mean any type of operation on that table, including UPDATE, ALTER, or even SELECT or any other operation.
Thanks.

You can get the last update time of a table.
SELECT update_time FROM information_schema.tables WHERE table_name='tablename'

You can use the OS level stat command.
Locate the .ibd file for that particular table and run the command below:
stat file_location
If the table is being queried by SELECT, you can find the timestamp of when it was accessed under the Access field.
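For example, on a typical Linux install the file usually lives under the data directory (the database and table names below are placeholders):
stat /var/lib/mysql/mydb/mytable.ibd | grep -E 'Access|Modify'
Keep in mind that the Access timestamp only moves if the filesystem tracks atime (many systems mount with relatime or noatime), and SELECTs served entirely from the InnoDB buffer pool may never touch the file, so treat it as a rough indicator.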

I don't know how to get the exact time after-the-fact, but you can start dumping logs, do something, and then stop dumping logs. Whichever tables show up in the logs are the ones that were accessed during that time.
If you care to dig through the log, the queries are shown with timestamps.
Tell MySQL where to put the log file
Add this line to my.cnf (on some systems it will be mysql.conf.d/mysqld.cnf).
general_log_file = /path/to/query.log
Enable the general log
mysql> SET global general_log = 1;
(don't forget to turn this off, it can grow very quickly)
Do the thing
All mysql queries will be added to /path/to/query.log
Disable the general log
mysql> SET global general_log = 0;
See which tables appeared
If it's short, you can just scroll through query.log. If not, then you can filter the log for known table names, like so:
query_words=$(cat /path/to/query.log | tr -s '[:space:]' '\n' | tr -c -d '[:alnum:][:space:]_-' | egrep -v '[0-9]' | sort | uniq)
table_names=$(mysql -uroot -ptest -Dmeta -e"show tables;" | sort | uniq)
comm -12 <(echo "$table_names") <(echo "$query_words")
From there, you can grep the log file for whatever showed up in table_names. There you will find timestamped queries.
See also this utility, which I made.

For more detail (database name and table name) plus a date range, try this query:
select table_schema as DatabaseName,
table_name as TableName,
update_time as LastAccessTime
from information_schema.tables
where update_time < 'yyyy-mm-dd'
order by update_time asc

Use the information_schema database to find out when a table in the respective database was last updated:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'dbname'
AND TABLE_NAME = 'tablename'
order by UPDATE_TIME DESC

Related

MySQL checksum for all tables in a database

I am evaluating a PHP/MySQL based software.
I want to see which tables are affected when certain operations are triggered.
After some googling, I was told that CHECKSUM TABLE tbl_name can do the job. I just need to know how to run the checksum for all the tables in the db.
Checksumming all the tables one by one manually is definitely not preferred, as the database contains hundreds of tables.
Checksumming all tables seems like a lot of expensive calculation work just to detect which tables changed.
I'd suggest getting this information from the sys.schema_table_statistics view.
mysql> select table_schema, table_name, rows_fetched, rows_inserted, rows_updated, rows_deleted
from sys.schema_table_statistics where table_schema='test'
+--------------+---------------------+--------------+---------------+--------------+--------------+
| table_schema | table_name          | rows_fetched | rows_inserted | rows_updated | rows_deleted |
+--------------+---------------------+--------------+---------------+--------------+--------------+
| test         | sysbench_results    |          870 |           144 |            0 |            0 |
+--------------+---------------------+--------------+---------------+--------------+--------------+
You probably want to reset the counters between your tests. Use sys.ps_truncate_all_tables()
mysql> call sys.ps_truncate_all_tables(FALSE);
+---------------------+
| summary             |
+---------------------+
| Truncated 31 tables |
+---------------------+
mysql> select table_schema, table_name, rows_fetched, rows_inserted, rows_updated, rows_deleted
from sys.schema_table_statistics where table_schema='test';
+--------------+---------------------+--------------+---------------+--------------+--------------+
| table_schema | table_name          | rows_fetched | rows_inserted | rows_updated | rows_deleted |
+--------------+---------------------+--------------+---------------+--------------+--------------+
| test         | sysbench_results    |            0 |             0 |            0 |            0 |
+--------------+---------------------+--------------+---------------+--------------+--------------+
The sys schema comes pre-installed in MySQL 5.7.
If you use MySQL 5.6, you may need to install it yourself. It's just an SQL script that creates some views into the performance_schema. Very easy to install.
You can get the sys schema here: https://github.com/mysql/mysql-sys
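Installation on 5.6 is roughly the following; sys_56.sql is the pre-generated script the repository ships (check the repo in case the file name has changed):
git clone https://github.com/mysql/mysql-sys.git
cd mysql-sys
mysql -u root -p < sys_56.sql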
I want to see which tables are affected when certain operations are triggered.
What do you mean by this?
Do you know what operations have been triggered, and you're merely attempting to understand what effect they had on your database (e.g. to verify their correctness)? Or do you not know what operations have been triggered (e.g. during some interval) but you nevertheless want to understand how the database has changed, perhaps in an attempt to determine what those operations were?
There are very few situations where I would expect the best approach to be that which you are exploring (inspecting the database for changes). Instead, some form of logging—whether built-in to the RDBMS (such as MySQL's General Query Log or perhaps through triggers as suggested by Sumesh), or more likely at some higher level (e.g. within the accessing application)—would almost always be preferable. This leads me to lean toward thinking you have an XY Problem.
However, on the assumption that you really do want to identify the tables that have been modified since some last known good point in time, you can query the INFORMATION_SCHEMA.TABLES table, which contains not only the CHECKSUM for every table in the RDBMS but also other potentially useful information like UPDATE_TIME. So, for example, to identify all tables changed in the last five minutes one could do:
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE UPDATE_TIME > NOW() - INTERVAL 5 MINUTE
You could generate the CHECKSUM statements for all tables:
SELECT CONCAT('CHECKSUM TABLE ', table_name, ';') AS statement
FROM information_schema.tables
WHERE table_schema = 'YourDBNameHere'
Then copy this output and paste it into Workbench or whatever tool you need to use. If you need to do this from within application code (e.g. PHP), then you would probably have to use pure dynamic MySQL.
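If copy/paste is awkward, a small shell pipeline can feed the generated statements straight back into MySQL. This is just a sketch; it assumes your credentials come from a config file and that YourDBNameHere is the target schema:
mysql -N -e "SELECT CONCAT('CHECKSUM TABLE ', table_name, ';') FROM information_schema.tables WHERE table_schema = 'YourDBNameHere'" | mysql -t YourDBNameHere
The -N flag suppresses the column header so only the generated statements are piped through, and -t keeps tabular output so the checksums are readable.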
For those who came here looking for a way to get the checksum for all the tables in one query (as was my case):
SET group_concat_max_len = CAST(
(
SELECT SUM(LENGTH(TABLE_NAME)) + COUNT(*) * LENGTH(', ')
FROM information_schema.tables WHERE `TABLE_SCHEMA` = 'your_database_name'
) AS UNSIGNED
);
SET @sql_command := (
SELECT CONCAT(
'CHECKSUM TABLE ',
GROUP_CONCAT( TABLE_NAME ORDER BY `TABLE_NAME` SEPARATOR ', ' )
)
FROM information_schema.tables
WHERE `TABLE_SCHEMA` = 'your_database_name'
ORDER BY `TABLE_NAME`
);
PREPARE statement FROM @sql_command;
EXECUTE statement;
DEALLOCATE PREPARE statement;
The idea is to create a CHECKSUM TABLE statement which includes all table names in it. So yes, it is a slightly upgraded version of the answer given by Tim Biegeleisen.
First we set the maximum permitted result length for the GROUP_CONCAT() function (which is 1024 bytes by default). It is calculated as the number of characters in all table names, including the separators that will be placed between those names:
SET group_concat_max_len = CAST(
(
SELECT SUM(LENGTH(TABLE_NAME)) + COUNT(*) * LENGTH(', ')
FROM information_schema.tables WHERE `TABLE_SCHEMA` = 'your_database_name'
) AS UNSIGNED
);
Then we put all the table names together in one CHECKSUM TABLE statement and store it in a string variable:
SET @sql_command := (
SELECT CONCAT(
'CHECKSUM TABLE ',
GROUP_CONCAT( TABLE_NAME ORDER BY `TABLE_NAME` SEPARATOR ', ' )
)
FROM information_schema.tables
WHERE `TABLE_SCHEMA` = 'your_database_name'
ORDER BY `TABLE_NAME`
);
And finally we execute the statement to see the results:
PREPARE statement FROM @sql_command;
EXECUTE statement;
DEALLOCATE PREPARE statement;
Unfortunately you can't further manipulate the result set using MySQL statements alone (i.e. insert it into a table or join it with other result sets).
So if you need to do some comparisons, you will eventually need additional code in your favorite programming language (or capable software) to accomplish the task.
The question does not say that using a shell script is off limits, so I'll post one such approach here (PHP is able to invoke shell scripts - see http://php.net/manual/en/function.shell-exec.php - if safe mode is not enabled):
If your script has shell access and a checksum tool - like md5sum - at its disposal, you can do something like this to collect a checksum for each table:
#!/bin/bash
DATABASEPATH="/var/lib/mysql/yourdatabase"
cd "$DATABASEPATH" &&
for TABLEFILE in $(ls -t *.ibd); do
# md5sum prints "<checksum>  <filename>"; strip the .ibd extension from the output
SUMANDTABLE=$(md5sum "$TABLEFILE")
echo "${SUMANDTABLE//.ibd}"
done
And optionally, if you don't want a checksum calculated for all tables, you can also check whether the modification date of the "$TABLEFILE" is within range; if not, you just exit the script (ls -t orders by modification date, descending).
To get the modification date, use something like stat -c %Y "$TABLEFILE". This gives you the modification date in seconds since the Epoch.
To get the current date, also in seconds since the Epoch, use date +%s.
You can then subtract the modification date from the current date to establish how many seconds ago a "$TABLEFILE" was changed.
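As a sketch of that variation, keeping only tables whose files changed in the last hour (3600 seconds is just an example cutoff; the path is the same placeholder as in the script above):
#!/bin/bash
DATABASEPATH="/var/lib/mysql/yourdatabase"
MAXAGE=3600
NOW=$(date +%s)
cd "$DATABASEPATH" &&
for TABLEFILE in $(ls -t *.ibd); do
MODIFIED=$(stat -c %Y "$TABLEFILE")
# ls -t lists newest first, so the first file older than the cutoff ends the loop
if [ $((NOW - MODIFIED)) -gt "$MAXAGE" ]; then
break
fi
SUMANDTABLE=$(md5sum "$TABLEFILE")
echo "${SUMANDTABLE//.ibd}"
done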
Another related method, which in some cases could apply, would be to save the ls -t *.ibd listing (without even calculating checksums, just store filenames in order), then start an operation and at the end of that operation check for difference in file listing with another execution of ls -t *.ibd.

How to duplicate all the databases with limited rows in the tables

How can I duplicate my databases with a limited number of rows in the tables?
Basically the duplicated DB must have the same properties as the original database, but with a limited number of rows in the tables.
Try this: first create a similar table using
CREATE TABLE tbl_name_duplicate LIKE tbl_name;
then insert a limited number of records into it using
INSERT INTO tbl_name_duplicate SELECT * FROM tbl_name LIMIT 10;
to insert 10 records.
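To apply this to every table in a database, a small shell loop over information_schema works as a sketch (mydb and mydb_copy are placeholders, the copy database must already exist, credentials are assumed to come from your config file, and foreign keys may force a particular table order):
SRC=mydb
DST=mydb_copy
for T in $(mysql -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='$SRC' AND table_type='BASE TABLE'"); do
mysql -e "CREATE TABLE \`$DST\`.\`$T\` LIKE \`$SRC\`.\`$T\`"
mysql -e "INSERT INTO \`$DST\`.\`$T\` SELECT * FROM \`$SRC\`.\`$T\` LIMIT 10"
done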
Another approach is to use the --where option of mysqldump, so you could express something similar to this SQL query:
SELECT * FROM table_name WHERE id > (SELECT MAX(id) FROM table_name) - 10
rewritten for mysqldump (but you'll have to dump one table at a time, not the whole database):
mysqldump [options] some_database table_name --where="id > (SELECT MAX(id) FROM table_name) - 10" | mysql --host=host --user=user --password=password some_database_copy
More information in the MySQL Reference Manual.
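Wrapped in a loop, the per-table dump could look roughly like this (a sketch that assumes every table has a numeric id column, as in the query above, and that credentials for the dump side come from your config file):
for T in $(mysql -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='some_database' AND table_type='BASE TABLE'"); do
mysqldump some_database "$T" --where="id > (SELECT MAX(id) FROM $T) - 10" | mysql --host=host --user=user --password=password some_database_copy
done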

MySQL: how to drop multiple tables using single query?

I want to drop multiple tables easily, without actually listing the table names in the drop query. The tables to be deleted share a prefix, say 'wp_'.
I've used a query very similar to Angelin's. In case you have more than a few tables, you have to increase the max length of group_concat; otherwise the query will barf on the truncated string that group_concat returns.
This is my 10 cents:
-- Increase memory to avoid truncating string, adjust according to your needs
SET group_concat_max_len = 1024 * 1024 * 10;
-- Generate drop command and assign to variable
SELECT CONCAT('DROP TABLE ',GROUP_CONCAT(CONCAT(table_schema,'.',table_name)),';') INTO @dropcmd FROM information_schema.tables WHERE table_schema='databasename' AND table_name LIKE 'my_table%';
-- Drop tables
PREPARE str FROM @dropcmd; EXECUTE str; DEALLOCATE PREPARE str;
Just sharing one of the solutions:
mysql> SELECT CONCAT(
"DROP TABLE ",
GROUP_CONCAT(TABLE_NAME)
) AS stmt
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = "your_db_name" AND TABLE_NAME LIKE "wp_%" into outfile '/tmp/a.txt';
mysql> source /tmp/a.txt;
Simple solution without risk of error:
mysqldump creates a file that contains DROP commands like
DROP TABLE IF EXISTS `wp_matable`;
a grep for "DROP TABLE `wp_" gives us the commands to execute
so the drop is done by these three lines (you can edit drop.sql first to check which tables would be dropped):
mysqldump -u user -p database > dump.sql
grep "DROP TABLE `wp_" dump.sql > drop.sql
mysql -u user -p database < drop.sql
Be careful with "_": it needs to be escaped with a "\" in MySQL LIKE patterns, like so:
SELECT CONCAT('DROP TABLE ',GROUP_CONCAT(CONCAT(table_schema,'.',table_name)),';') INTO @dropcmd FROM information_schema.tables WHERE table_schema='databasename' AND table_name LIKE 'my\_table%';
A less complicated solution when a large number of tables need to be deleted:
SELECT GROUP_CONCAT(table_name SEPARATOR ", ")
-> AS tables
-> FROM information_schema.tables
-> WHERE table_schema = "my_database_name"
-> AND table_name LIKE "wp_%";
+-------------------------------------------------------------------------
| tables
+-------------------------------------------------------------------------
| wp_t1, wp_t2, wp_t3, wp_t4, wp_t5, wp_t6, wp_t7, wp_t7, wp_ ..........
+-------------------------------------------------------------------------
Copy the table names. Then use -
DROP TABLE
-> wp_t1, wp_t2, wp_t3, wp_t4, wp_t5, wp_t6, wp_t7, wp_t7, wp_ ..........;
For the great mysqldump solution above, it's better to use the --skip-quote-names option:
mysqldump --skip-quote-names -u user -p database > dump.sql
grep "DROP TABLE wp_" dump.sql > drop.sql
mysql -u user -p database < drop.sql
This gets rid of the backticks around table names; the grep part won't work in some environments with the backticks present.
Go to c:\xampp\mysql\data\your_database_folder
Select the tables that you want to remove and then press the delete button
Thanks
Dropping a single table in MySQL:
DROP TABLE TABLE_NAME;

Updating AUTO_INCREMENT value of all tables in a MySQL database

It is possible to set/reset the AUTO_INCREMENT value of a MySQL table via
ALTER TABLE some_table AUTO_INCREMENT = 1000
However, I need to set the AUTO_INCREMENT based upon its existing value (to fix M-M replication), something like:
ALTER TABLE some_table SET AUTO_INCREMENT = AUTO_INCREMENT + 1, which is not working
Actually, I would like to run this query for all tables within a database, but this is not crucial.
I could not find a way to deal with this problem, except running the queries manually. Will you please suggest something or point me to some ideas?
Thanks
Using:
ALTER TABLE some_table AUTO_INCREMENT = 0
...will reset the auto_increment value to be the next value based on the highest existing value in the auto_increment column.
To run this over all the tables, you'll need to use MySQL's dynamic SQL (prepared statements), because you can't supply the table name to an ALTER TABLE statement as a variable. You'll have to loop over the output from:
SELECT t.table_name
FROM INFORMATION_SCHEMA.TABLES t
WHERE t.table_schema = 'your_database_name'
...running the ALTER TABLE statement above for each table.
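As a quick sanity check (not part of the original answer), the current counter for each table can be read back from the same place; the database name here is a placeholder:
mysql -e "SELECT TABLE_NAME, AUTO_INCREMENT FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'your_database_name'"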
set @db = 'your_db_name';
SELECT concat('ALTER TABLE ', @db, '.', TABLE_NAME, ' AUTO_INCREMENT = 0;')
FROM information_schema.TABLES WHERE TABLE_SCHEMA = @db AND TABLE_TYPE = 'BASE TABLE';
Then copy-paste and run the output you get.
In the instructions below you will need to replace everything in [brackets] with your correct values. BACK UP BEFORE ATTEMPTING.
If you can log in to mysql as root through the command line, then you can do the following to reset the auto_increment on all tables. First we will construct the queries we want to run.
Make a database backup:
mysqldump -u [uname] -p [dbname] | gzip -9 > [backupfile.sql.gz]
Login:
mysql -u root -p
Set group_concat_max_len to a higher value so our list of queries doesn't get truncated:
SET group_concat_max_len=100000;
Create our list of queries by using the following:
SELECT GROUP_CONCAT(CONCAT("ALTER TABLE ", table_name, " AUTO_INCREMENT = 0") SEPARATOR ";") FROM information_schema.tables WHERE table_schema = "[DATABASENAME]";
Then you will receive a long string of MySQL queries followed by a bunch of dashes. Copy the string of queries to your clipboard; it will look something like:
ALTER TABLE table1 AUTO_INCREMENT = 0;ALTER TABLE table2 AUTO_INCREMENT = 0;...continued...
Change to the database you would like to run the command on:
USE [DATABASENAME];
Then paste the string that is on your clipboard and hit enter to run it. This should run the alter on every table in your database.
Messed up? Restore from your backup. Be sure to log out of mysql before running the following (just type exit; to do so):
gzip -d < [backupfile.sql.gz] | mysql -u [uname] -p [dbname]
I will not take responsibility for any damage caused by your use of any of these commands; use at your own risk.
I found this gist on github and it worked like a charm for me: https://gist.github.com/abhinavlal/4571478
The command:
mysql -Nsr -e "SELECT t.table_name FROM INFORMATION_SCHEMA.TABLES t WHERE t.table_schema = 'DB_NAME'" | xargs -I {} mysql DB_NAME -e "ALTER TABLE {} AUTO_INCREMENT = 1;"
If your DB requires a password, you unfortunately have to put that in the command for it to work. One work-around (still not great but works) is to put the password in a secure file. You can always delete the file after so the password doesn't stay in your command history:
... | xargs -I {} mysql -u root -p`cat /path/to/pw.txt` DB_NAME -e...
Assuming that you must fix this by amending the auto-increment column rather than the foreign keys in the table decomposing the N:M relationship, and that you can predict what the right values are: try using a temporary table where the relevant column is not auto-increment, then map this back in place of the original table and change the column type to auto-increment afterwards; or truncate the original table and load the data from the temp table.
I have written the procedure below; change the database name and execute the procedure:
CREATE DEFINER=`root`@`localhost` PROCEDURE `setAutoIncrement`()
BEGIN
DECLARE done int default false;
DECLARE table_name CHAR(255);
DECLARE cur1 cursor for SELECT t.table_name FROM INFORMATION_SCHEMA.TABLES t
WHERE t.table_schema = "buzzer_verifone";
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
open cur1;
myloop: loop
fetch cur1 into table_name;
if done then
leave myloop;
end if;
set @sql = CONCAT('ALTER TABLE ',table_name, ' AUTO_INCREMENT = 1');
prepare stmt from @sql;
execute stmt;
drop prepare stmt;
end loop;
close cur1;
END
Execute the procedure above using the line below:
Call setAutoIncrement();
The quickest solution to update/reset AUTO_INCREMENT in a MySQL database:
Ensure that the AUTO_INCREMENT column is not used as a FOREIGN KEY in another table.
First, drop the AUTO_INCREMENT column:
ALTER TABLE table_name DROP column_name
Example: ALTER TABLE payments DROP payment_id
Then re-add the column and move it to be the first column in the table:
ALTER TABLE table_name ADD column_name DATATYPE AUTO_INCREMENT PRIMARY KEY FIRST
Example: ALTER TABLE payments ADD payment_id INT AUTO_INCREMENT PRIMARY KEY FIRST
Resetting a MySQL table's auto increment is very easy; we can do it with a single query. Please see http://webobserve.blogspot.com/2011/02/reset-mysql-table-autoincrement.html.

Query to find tables modified in the last hour

I want to find out which tables have been modified in the last hour in a MySQL database. How can I do this?
MySQL 5.x can do this via the INFORMATION_SCHEMA database. This database contains information about tables, views, columns, etc.
SELECT *
FROM `INFORMATION_SCHEMA`.`TABLES`
WHERE
DATE_SUB(NOW(), INTERVAL 1 HOUR) < `UPDATE_TIME`
This returns all tables that have been updated (UPDATE_TIME) in the last hour. You can also filter by database name (the TABLE_SCHEMA column).
An example query:
SELECT
CONCAT(`TABLE_SCHEMA`, '.', `TABLE_NAME`) AS `Table`,
UPDATE_TIME AS `Updated`
FROM `INFORMATION_SCHEMA`.`TABLES`
WHERE
DATE_SUB(NOW(), INTERVAL 3 DAY) < `UPDATE_TIME`
AND `TABLE_SCHEMA` != 'INFORMATION_SCHEMA'
AND `TABLE_TYPE` = 'BASE TABLE';
For each table where you want to detect changes, you need a column that holds the timestamp of the last change.
For every insert or update in the table, you then need to set that column to the current date and time.
Alternatively, you can set up a trigger which updates the column automatically on each insert or update; that way you don't have to modify all of your queries (a minimal sketch follows after this answer).
Once this works, to find out if rows from a table have been modified in the last hour, perform the query
select count(*) from mytable where datemod>subtime(now(),'1:0:0')
Repeat for every table you want to check.
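A minimal sketch of that trigger idea, assuming a table mytable with a DATETIME column datemod (all names here are placeholders), run through the mysql client:
mysql your_database <<'SQL'
-- keep datemod current on every write without touching application queries
CREATE TRIGGER mytable_datemod_ins BEFORE INSERT ON mytable
FOR EACH ROW SET NEW.datemod = NOW();
CREATE TRIGGER mytable_datemod_upd BEFORE UPDATE ON mytable
FOR EACH ROW SET NEW.datemod = NOW();
SQL
A TIMESTAMP column declared with DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP achieves much the same thing without triggers, if changing the column definition is an option.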
InnoDB still lacks a native mechanism to retrieve this information. In the related feature request at MySQL, someone advises setting AFTER [all events] triggers on each table to be monitored (one is sketched below, after the table definition). The trigger would issue a statement such as
INSERT INTO last_update VALUE ('current_table_name', NOW())
ON DUPLICATE KEY UPDATE update_time = NOW();
in a table like this:
CREATE TABLE last_update (
table_name VARCHAR(64) PRIMARY KEY,
update_time DATETIME
) ENGINE = MyISAM; -- no need for transactions here
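A sketch of one such trigger for a hypothetical monitored table named orders (the AFTER UPDATE and AFTER DELETE variants would be created the same way, on every table you want to track):
mysql your_database <<'SQL'
CREATE TRIGGER orders_touch_ins AFTER INSERT ON orders
FOR EACH ROW
INSERT INTO last_update VALUES ('orders', NOW())
ON DUPLICATE KEY UPDATE update_time = NOW();
SQL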
Alternatively, if a slight inaccuracy in this data (in the range of one second) is acceptable, and if you have read access to the MySQL data files, you could switch to a setting where innodb_file_per_table = ON (recommended in any case) and check the last modification time of the underlying data files.
These files are found under /var/lib/mysql/[database_name]/*.ibd in most default installations.
Please note, if you decide to take this route, you need to recreate existing tables for the new setting to apply.
I have answered a question like this in the DBA StackExchange about 1.5 years ago: Fastest way to check if InnoDB table has changed.
Based on that old answer, I recommend the following
Flushing Writes to Disk
This is a one-time setup. You need to set innodb_max_dirty_pages_pct to 0.
First, add this to /etc/my.cnf
[mysqld]
innodb_max_dirty_pages_pct=0
Then, run this to avoid having to restart mysql:
mysql> SET GLOBAL innodb_max_dirty_pages_pct = 0;
Get Timestamp of InnoDB table's .ibd file
ls has an option to display the UNIX timestamp in seconds. For an InnoDB table mydb.mytable:
$ cd /var/lib/mysql/mydb
$ ls -l --time-style="+%s" mytable.ibd | awk '{print $6}'
You can then compute UNIX_TIMESTAMP(NOW()) - (timestamp of the .ibd file) and see if it is 3600 or less.
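As a small shell sketch of that comparison (same mydb.mytable example; the data directory path is the usual default and may differ on your system):
IBD=/var/lib/mysql/mydb/mytable.ibd
# seconds since the tablespace file was last written
AGE=$(( $(date +%s) - $(stat -c %Y "$IBD") ))
if [ "$AGE" -le 3600 ]; then echo "mydb.mytable changed within the last hour"; fi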
Give it a Try !!!
SELECT *
FROM information_schema.tables
WHERE UPDATE_TIME >= SYSDATE() - INTERVAL 1 DAY && TABLE_TYPE != 'SYSTEM VIEW'
SELECT *
FROM information_schema.tables
WHERE UPDATE_TIME >= DATE_SUB(CURDATE(), INTERVAL 1 DAY) && TABLE_TYPE != 'SYSTEM VIEW'