How can I tell when a MySQL table was last updated? - mysql

In the footer of my page, I would like to add something like "last updated on xx/xx/200x", with this date being the last time a certain MySQL table was updated.
What is the best way to do that? Is there a function to retrieve the last-updated date? Should I access the database every time I need this value?

In later versions of MySQL you can use the information_schema database to tell you when another table was updated:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'dbname'
AND TABLE_NAME = 'tabname'
This does of course mean opening a connection to the database.
An alternative option would be to "touch" a particular file whenever the MySQL table is updated:
On database updates:
open your timestamp file in O_RDWR mode, then close it again,
or alternatively
use touch(), the PHP equivalent of the utimes() function, to change the file timestamp.
On page display:
use stat() to read back the file modification time.

I'm surprised no one has suggested tracking last update time per row:
mysql> CREATE TABLE foo (
id INT PRIMARY KEY,
x INT,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP,
KEY (updated_at)
);
mysql> INSERT INTO foo (id, updated_at) VALUES (1, NOW() - INTERVAL 3 DAY), (2, NOW());
mysql> SELECT * FROM foo;
+----+------+---------------------+
| id | x    | updated_at          |
+----+------+---------------------+
|  1 | NULL | 2013-08-18 03:26:28 |
|  2 | NULL | 2013-08-21 03:26:28 |
+----+------+---------------------+
mysql> UPDATE foo SET x = 1234 WHERE id = 1;
This updates the timestamp even though we didn't mention it in the UPDATE.
mysql> SELECT * FROM foo;
+----+------+---------------------+
| id | x    | updated_at          |
+----+------+---------------------+
|  1 | 1234 | 2013-08-21 03:30:20 | <-- this row has been updated
|  2 | NULL | 2013-08-21 03:26:28 |
+----+------+---------------------+
Now you can query for the MAX():
mysql> SELECT MAX(updated_at) FROM foo;
+---------------------+
| MAX(updated_at)     |
+---------------------+
| 2013-08-21 03:30:20 |
+---------------------+
Admittedly, this requires more storage (4 bytes per row for TIMESTAMP).
But this works for InnoDB tables on MySQL versions before 5.7.15, which INFORMATION_SCHEMA.TABLES.UPDATE_TIME does not.

I don't have the information_schema database (I'm using MySQL version 4.1.16), so in this case you can query this instead:
SHOW TABLE STATUS FROM your_database LIKE 'your_table';
It will return these columns:
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
As you can see, there is a column called "Update_time" that shows you the last update time for your_table.

The simplest thing would be to check the timestamps of the table files on disk. For example, you can check under your data directory:
cd /var/lib/mysql/<mydatabase>
ls -lhtr *.ibd
This should give you a list of all tables, sorted by when each was last modified, oldest first.

For a list of recent table changes use this:
SELECT UPDATE_TIME, TABLE_SCHEMA, TABLE_NAME
FROM information_schema.tables
ORDER BY UPDATE_TIME DESC, TABLE_SCHEMA, TABLE_NAME

I would create a trigger that catches all updates/inserts/deletes and writes a timestamp into a custom table, something like
tablename | timestamp
simply because I don't like the idea of reading the DB server's internal system tables directly.
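A minimal sketch of that idea, assuming a bookkeeping table named table_updates and a watched table named orders (both names are placeholders; each watched table would need a trigger per event type):
-- one row per watched table
CREATE TABLE table_updates (
    table_name VARCHAR(64) PRIMARY KEY,
    updated_at DATETIME NOT NULL
);

-- example trigger for UPDATEs on the watched table
CREATE TRIGGER orders_touch_au AFTER UPDATE ON orders
FOR EACH ROW
    REPLACE INTO table_updates VALUES ('orders', NOW());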

Although there is an accepted answer, I don't feel that it is the right one. It is the simplest way to achieve what is needed, but even now that UPDATE_TIME is populated for InnoDB (the docs actually tell you that you may still get NULL), the MySQL docs, even for the current version (8.0), say that using UPDATE_TIME is not the right option, because:
Timestamps are not persisted when the server is restarted or when the
table is evicted from the InnoDB data dictionary cache.
If I understand correctly (I can't verify it on a server right now), the timestamp gets reset after a server restart.
As for real (and, well, costly) solutions, you have Bill Karwin's solution with CURRENT_TIMESTAMP, and I'd like to propose a different one that is based on triggers (it's the one I'm using).
You start by creating a separate table (or maybe you have some other table that can be used for this purpose) that works as storage for global variables, in this case timestamps. You need to store two fields: the table name (or whatever value you'd like to keep here as the table ID) and a timestamp. Once you have it, initialize it with that table ID and a starting date (NOW() is a good choice :) ).
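A minimal sketch of that bookkeeping table and its initialization, reusing the placeholder names from the procedure below:
CREATE TABLE `SCHEMA_NAME`.`TIMESTAMPS_TABLE_NAME` (
    `table_name_column` VARCHAR(64) NOT NULL PRIMARY KEY,
    `timestamp_column`  DATETIME    NOT NULL
);

-- seed one row per observed table with a starting date
INSERT INTO `SCHEMA_NAME`.`TIMESTAMPS_TABLE_NAME` (`table_name_column`, `timestamp_column`)
VALUES ('TABLE_NAME', NOW());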
Now you move to the tables you want to observe and add AFTER INSERT/UPDATE/DELETE triggers that call this or a similar procedure:
CREATE PROCEDURE `timestamp_update` ()
BEGIN
    UPDATE `SCHEMA_NAME`.`TIMESTAMPS_TABLE_NAME`
    SET `timestamp_column` = DATE_FORMAT(NOW(), '%Y-%m-%d %T')
    WHERE `table_name_column` = 'TABLE_NAME';
END
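And a sketch of the triggers themselves on an observed table (TABLE_NAME and the trigger names are placeholders); each of the three events gets its own trigger that simply calls the procedure:
CREATE TRIGGER `TABLE_NAME_after_insert` AFTER INSERT ON `TABLE_NAME`
FOR EACH ROW CALL `timestamp_update`();

CREATE TRIGGER `TABLE_NAME_after_update` AFTER UPDATE ON `TABLE_NAME`
FOR EACH ROW CALL `timestamp_update`();

CREATE TRIGGER `TABLE_NAME_after_delete` AFTER DELETE ON `TABLE_NAME`
FOR EACH ROW CALL `timestamp_update`();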

OS level analysis:
Find where the DB is stored on disk:
grep datadir /etc/my.cnf
datadir=/var/lib/mysql
Check for the most recent modifications:
cd /var/lib/mysql/{db_name}
ls -lrt
Should work on all database types.

a) This will show you all tables and their last update dates:
SHOW TABLE STATUS FROM db_name;
Then you can further ask about a specific table:
SHOW TABLE STATUS FROM db_name like 'table_name';
b) With SHOW TABLE STATUS, as in the examples above, you cannot sort on 'Update_time', but using SELECT you can:
SELECT * FROM information_schema.tables WHERE TABLE_SCHEMA='db_name' ORDER BY UPDATE_TIME DESC;
To further ask about a particular table:
SELECT * FROM information_schema.tables WHERE TABLE_SCHEMA='db_name' AND table_name='table_name' ORDER BY UPDATE_TIME DESC;

I got this to work locally, but not on my shared host for my public website (a permissions issue, I think).
SELECT last_update FROM mysql.innodb_table_stats WHERE table_name = 'yourTblName';
'2020-10-09 08:25:10'
MySQL 5.7.20-log on Win 8.1

Just grab the file's modified date from the file system. In my language that is:
tbl_updated = file.update_time(
"C:\ProgramData\MySQL\MySQL Server 5.5\data\mydb\person.frm")
Output:
1/25/2013 06:04:10 AM

If you are running Linux you can use inotify to watch the table or the database directory. inotify is available from PHP, node.js, Perl, and I suspect most other languages. Of course you must have inotify installed, or have had your ISP install it; a lot of ISPs will not.

Not sure if this would be of any interest, but using mysql-proxy between MySQL and the clients, with a Lua script that updates a key value in memcached on interesting table changes (UPDATE, DELETE, INSERT), was the solution I implemented quite recently. If the PHP wrapper supported hooks or triggers, this could have been easier; none of the wrappers currently do.

I made a column named update-at in phpMyAdmin and set it to the current time from the Date() method in my code (Node.js). With every change to the table, this column holds the time of the change.
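If you would rather let MySQL maintain such a column by itself instead of setting it from application code, a hedged sketch (the table name mytable is an assumption):
-- MySQL fills and refreshes the column automatically on INSERT and UPDATE
ALTER TABLE mytable
    ADD COLUMN updated_at TIMESTAMP
        NOT NULL DEFAULT CURRENT_TIMESTAMP
        ON UPDATE CURRENT_TIMESTAMP;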

Same as the others, but with some conditions I've used to save time:
SELECT
UPDATE_TIME,
TABLE_SCHEMA,
TABLE_NAME
FROM
information_schema.tables
WHERE
1 = 1
AND UPDATE_TIME > '2021-11-09 00:00:00'
AND TABLE_SCHEMA = 'db_name_here'
AND TABLE_NAME NOT IN ('table_name_here')
ORDER BY
UPDATE_TIME DESC,
TABLE_SCHEMA,
TABLE_NAME;

This is what I did, I hope it helps.
<?php
mysql_connect("localhost", "USER", "PASSWORD") or die(mysql_error());
mysql_select_db("information_schema") or die(mysql_error());

$query1 = "SELECT `UPDATE_TIME` FROM `TABLES`
           WHERE `TABLE_SCHEMA` LIKE 'DataBaseName' AND `TABLE_NAME` LIKE 'TableName'";
$result1 = mysql_query($query1) or die(mysql_error());

while ($row = mysql_fetch_array($result1)) {
    echo "<strong>1r tr.: </strong>" . $row['UPDATE_TIME'];
}
?>

Cache the query result in a global variable when it is not available.
Create a webpage to force the cache to be reloaded when you update it.
Add a call to the reloading page into your deployment scripts.

Related

Process TEXT BLOB fields in MySQL line by line

I have a MEDIUMTEXT blob in a table which contains paths separated by newline characters. I'd like to add a "/" to the beginning of each line if it is not already there. Is there a way to write a query to do this with built-in procedures?
I suppose an alternative would be to write a Python script to get the field, convert it to a list, process each line, and update the record. There aren't that many records in the DB (about 8K+ rows), so I can take the processing delay (as long as it doesn't lock the entire DB or table).
Either way would be fine. If the second option is recommended, do I need to know about any specific locking semantics before getting into this? It would be run on a live prod DB (of course, I'd take a DB snapshot), but in-place updates would be best to avoid downtime.
Demo:
mysql> create table mytable (id int primary key, t text );
mysql> insert into mytable values (1, 'path1\npath2\npath3');
mysql> select * from mytable;
+----+-------------------+
| id | t                 |
+----+-------------------+
|  1 | path1
path2
path3             |
+----+-------------------+
1 row in set (0.00 sec)
mysql> update mytable set t = concat('/', replace(t, '\n', '\n/'));
mysql> select * from mytable;
+----+----------------------+
| id | t                    |
+----+----------------------+
|  1 | /path1
/path2
/path3               |
+----+----------------------+
However, I would strongly recommend storing each path in its own row, so you don't have to think about this at all. In SQL, each column should store one value per row, not a set of values.
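A minimal sketch of that normalized layout (the table and column names are assumptions):
CREATE TABLE mytable_paths (
    mytable_id INT NOT NULL,
    path       VARCHAR(255) NOT NULL,
    PRIMARY KEY (mytable_id, path),
    FOREIGN KEY (mytable_id) REFERENCES mytable (id)
);

-- one row per path instead of a newline-separated blob
INSERT INTO mytable_paths VALUES
    (1, '/path1'), (1, '/path2'), (1, '/path3');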

Wrong auto_increment value on select

I'm running MySQL 8 and whenever I run
SELECT AUTO_INCREMENT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'test'
AND TABLE_NAME = 'table';
I get the wrong auto_increment value. Straightforward example:
ALTER TABLE test.lieux auto_increment = 6;
SELECT AUTO_INCREMENT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'test'
AND TABLE_NAME = 'lieux';
This returns an AUTO_INCREMENT of 4.
I even tried inserting a row after altering the auto_increment; it was indeed inserted with a PK value of 6, but the SELECT statement still returned an AUTO_INCREMENT value of 4.
Is there something wrong with my schema, or did I misunderstand the SELECT AUTO_INCREMENT statement?
This happens because table statistics are cached beginning with MySQL 8.
To see the current value of the cache expiry, execute:
show variables like 'information_schema_stats_expiry'
/* output (For mysql 8+, default cache is 86400 seconds = 1 day) */
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| information_schema_stats_expiry | 86400 |
+---------------------------------+-------+
To get the latest AUTO_INCREMENT value, you should update the expiry time to bypass the cache.
There are 2 ways you can do this.
1. For the current session
To update the cache value for the current session alone, as suggested in the comments, execute
SET @@SESSION.information_schema_stats_expiry = 0;
2. Globally
If you wish to disable the cache altogether, use
SET PERSIST information_schema_stats_expiry = 0
IMO, a default cache of 1 day is overkill and totally unneeded. A cache of about 5 minutes (300 s) should generally suffice.
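For example, to apply that suggested 5-minute value instead of disabling the cache entirely:
SET PERSIST information_schema_stats_expiry = 300;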
Official Docs

Sorting order behaviour between Postgres and Mysql

I have run into some strange sort-order behaviour between Postgres and MySQL.
For example, I created a simple table with a varchar column and inserted two records, as below, in both Postgres and MySQL.
create table mytable(name varchar(100));
insert into mytable values ('aaaa'), ('aa_a');
Now I executed a simple select query with ORDER BY on that column.
Postgres sort order:
test=# select * from mytable order by (name) asc;
name
------
aa_a
aaaa
(2 rows)
Mysql sort order:
mysql> select * from mytable order by name asc;
+------+
| name |
+------+
| aaaa |
| aa_a |
+------+
2 rows in set (0.00 sec)
Postgres and MySQL both return the same records, but in a different order.
My question is: which one is correct?
And how can I get the results in the same order in both databases?
Edited:
I tried the query with ORDER BY ... COLLATE, and it solved my problem. I tried it like this:
mysql> select * from t order by name COLLATE utf8_bin;
+------+
| name |
+------+
| aa_a |
| aaaa |
+------+
2 rows in set (0.00 sec)
Thanks.
There is no "correct" way to sort data.
You need to read up on "locales".
Different locales will provide (among other things) different sort orders. You might have a database using ISO-8859-1 or UTF-8, which can represent several different languages. The rules for sorting English are different from those for French or German.
PostgreSQL uses the underlying operating-system's support for locales, and not all locales are available on all platforms. The alternative is to provide your own support, but then you can have incompatibilities within one machine.
I believe MySQL takes the second option, but I'm no expert on MySQL.

Mysql group by returns wrong result when using indexes

I have an issue with one of my MySQL queries. The issue is not reproducible on any of our local machines.
I have a simple query
SELECT ID
FROM TABLE_NAME
WHERE ID IN (15920,15921)
GROUP BY ID
returns this result:
ID
15920
This is an unexpected result, since there is data for both IDs in the database.
Using the EXPLAIN command returned the following for this query:
+----+-------------+------------+-------+--------------------+--------------------+---------+-----+------+---------------------------------------+
| id | select_type | table      | type  | possible_keys      | Key                | key_len | Ref | rows | Extra                                 |
+----+-------------+------------+-------+--------------------+--------------------+---------+-----+------+---------------------------------------+
|  1 | SIMPLE      | TABLE_NAME | range | CUST_SID_SRUN_INDX | CUST_SID_SRUN_INDX | 4       |     |    1 | Using where; Using index for group-by |
+----+-------------+------------+-------+--------------------+--------------------+---------+-----+------+---------------------------------------+
For this issue I have tried the following solutions:
• Forcing a derived table:
SELECT ID
FROM (SELECT ID
FROM TABLE_NAME
WHERE ID IN (15920,15921)) CUST
GROUP BY ID
• Using a HAVING clause instead of a WHERE clause:
SELECT ID
FROM TABLE_NAME
GROUP BY ID
HAVING ID IN (15920,15921)
• Ignoring the index used on this table:
SELECT ID
FROM TABLE_NAME IGNORE INDEX (CUST_SID_SRUN_INDX)
WHERE ID IN (15920,15921)
GROUP BY ID
All of the above queries return the expected result, as follows:
ID
15920
15921
I am trying to analyze the unexpected behavior of the GROUP BY clause when indexes are used. Please let me know if there is something else I could try.
FYI, the UAT box where the issue occurs is a Linux machine with MySQL 5.1.30. The difference we see is the MySQL version: we are using MySQL 5.1.52 on our own machines.
The table that has this issue uses the MyISAM storage engine.
Please let me know if any other input is required.
Thanks everyone for your help.
There is an issue in MySQL 5.1.30 with MyISAM partitions. After banging my head for several days, I resolved the issue by upgrading MySQL to version 5.1.52; reorganizing the partitions also works.
For your reference, see the following bug reported on the MySQL bug tracker:
http://bugs.mysql.com/bug.php?id=44821
If your query results are wrong using indexes, it is possible that your table is corrupt for some reason. Try:
REPAIR TABLE table_name;
Or back up the data, drop and re-create the table, and repopulate it from your backup (that will also recreate the indexes).
Visit MySQL Reference Manual - rebuilding tables

Total run time of multiple queries in mysql

I have some benchmark queries in a .sql file. If I use source in mysql to execute them, mysql shows the run time after each query, and there are pages and pages of query output. Is there any way I can obtain the total run time of all the queries?
Thanks a lot!
You can use MySQL's built-in profiling support by running this in your session before executing the queries:
SET profiling=1;
This allows you to easily see the time it took for each query:
mysql> SHOW PROFILES;
+----------+-------------+---------------------------------------------------------+
| Query_ID | Duration    | Query                                                   |
+----------+-------------+---------------------------------------------------------+
|        1 |  0.33174700 | SELECT COUNT(*) FROM myTable WHERE extra LIKE '%zkddj%' |
|        2 |  0.00036600 | SELECT COUNT(id) FROM myTable                           |
|        3 |  0.00087700 | CREATE TEMPORARY TABLE foo LIKE myTable                 |
|        4 | 33.52952000 | INSERT INTO foo SELECT * FROM myTable                   |
|        5 |  0.06431200 | DROP TEMPORARY TABLE foo                                |
+----------+-------------+---------------------------------------------------------+
5 rows in set (0.00 sec)
You can then sum up the times to get the total time:
SELECT SUM(Duration) from information_schema.profiling;
You can find more details on MySQL's profiling here.
Another approach you could take is to execute the SQL queries from the command line and use the Unix time command to time the execution. This may not give you the most precise time, though. Additionally, it won't give you a breakdown of how long each query took unless you use it in combination with MySQL profiling.
You could modify your .sql file to record a begin and an end timestamp and then subtract, without having to bring out an Excel spreadsheet and add it all up.
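A minimal sketch of that idea, wrapping the contents of the .sql file (NOW(6) gives microsecond precision on MySQL 5.6.4+; plain NOW() works everywhere):
-- first statement in the .sql file
SET @benchmark_start = NOW(6);

-- ... all the original benchmark queries ...

-- last statement in the .sql file
SELECT TIMEDIFF(NOW(6), @benchmark_start) AS total_run_time;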
Thanks for all the suggestions.
I ended up creating another table just to record the start and end time of each query in the .sql file.
I edited the .sql file and added an insert statement after each original query just to record the time. At the end, I can query this "time" table to profile the execution of the .sql file.
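A rough sketch of what that could look like; the table name and labels are assumptions:
CREATE TABLE query_times (
    label       VARCHAR(64),
    recorded_at DATETIME(6)  -- DATETIME(6) needs MySQL 5.6.4+, otherwise use DATETIME
);

INSERT INTO query_times VALUES ('start', NOW(6));
-- original benchmark query #1 goes here
INSERT INTO query_times VALUES ('after query 1', NOW(6));
-- original benchmark query #2 goes here
INSERT INTO query_times VALUES ('after query 2', NOW(6));

-- total elapsed time across the whole file
SELECT TIMEDIFF(MAX(recorded_at), MIN(recorded_at)) AS total_run_time
FROM query_times;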