Wrong AUTO_INCREMENT value on SELECT - MySQL

I'm running MySQL 8 and whenever I run
SELECT AUTO_INCREMENT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'test'
AND TABLE_NAME = 'table';
I get the wrong AUTO_INCREMENT value. A straightforward example:
ALTER TABLE test.lieux auto_increment = 6;
SELECT AUTO_INCREMENT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'test'
AND TABLE_NAME = 'lieux';
This returns an AUTO_INCREMENT of 4.
I even tried inserting a row after altering the auto_increment; it was indeed inserted with a PK value of 6, but the SELECT statement still returned an AUTO_INCREMENT value of 4.
Is there something wrong with my schema, or did I misunderstand the SELECT AUTO_INCREMENT statement?

This happens because table statistics are cached beginning with MySQL 8.
To see the current cache expiry, execute:
show variables like 'information_schema_stats_expiry';
/* output (for MySQL 8+, the default is 86400 seconds = 1 day) */
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| information_schema_stats_expiry | 86400 |
+---------------------------------+-------+
To get the latest AUTO_INCREMENT value, you should update the expiry time to bypass the cache.
There are 2 ways you can do this.
1. For the current session
To change the expiry for the current session alone, as suggested in the comments, execute
SET @@SESSION.information_schema_stats_expiry = 0;
2. Globally
If you wish to disable the cache altogether, use
SET PERSIST information_schema_stats_expiry = 0
IMO, a default cache of 1 day is overkill and unnecessary. A cache of about 5 minutes (300 s) should generally suffice.
Official Docs
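Putting it together with the example from the question (a sketch; the schema and table names are the ones used above):
ALTER TABLE test.lieux AUTO_INCREMENT = 6;
-- without this, the query below may keep returning the stale cached value (4):
SET @@SESSION.information_schema_stats_expiry = 0;
SELECT AUTO_INCREMENT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'test'
AND TABLE_NAME = 'lieux';
-- now returns 6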

Related

MySQL returns extra records when using a long number to filter a varchar column

A simple table:
CREATE TABLE `tbl_type_test` (
`uid` varchar(31) NOT NULL DEFAULT '0',
`value` varchar(15) NOT NULL DEFAULT '',
PRIMARY KEY (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
and some records:
'6011656821812318924', 'a'
'6011656821812318925', 'b'
'6011656821812318926', 'c'
When I execute the following SQL, 3 records are returned:
select * from tbl_type_test where uid = 6011656821812318924;
and this also returns 3 records, which is weird:
select * from tbl_type_test where uid = 6011656821812318900;
If I change the number to a string, only 1 record is returned, as expected:
select * from tbl_type_test where uid = '6011656821812318924';
I think the numeric type and the length of the number in the query are the reason, but I don't know the exact cause.
Any comment will be greatly appreciated.
"In all other cases, the arguments are compared as floating-point (real) numbers." (https://dev.mysql.com/doc/refman/5.7/en/type-conversion.html)
For example:
drop procedure if exists p;
delimiter $$
create procedure p (inval float, inval2 float, inval3 float)
select inval,inval2,inval3;
call p(6011656821812318924,6011656821812318925,6011656821812318926);
+------------+------------+------------+
| inval      | inval2     | inval3     |
+------------+------------+------------+
| 6.01166e18 | 6.01166e18 | 6.01166e18 |
+------------+------------+------------+
1 row in set (0.00 sec)
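A quick sketch (not from the answer, but using the table above) of both the effect and a workaround: because the varchar column is converted to a double, literals that differ only in the last couple of digits become indistinguishable, while keeping the comparison on the string side stays exact.
-- both sides are converted to double and round to the same value:
SELECT '6011656821812318924' = 6011656821812318900;   -- returns 1
-- keep the comparison string-vs-string, e.g. by quoting or casting the literal:
SELECT * FROM tbl_type_test WHERE uid = CAST(6011656821812318924 AS CHAR);   -- 1 row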
MySQL by default treats 1 and '1' the same; however, you can change that by setting MySQL to strict mode.
set @@GLOBAL.sql_mode = "STRICT_ALL_TABLES";
set @@SESSION.sql_mode = "STRICT_ALL_TABLES";
or you can set sql_mode in your my.cnf file to make this permanent. This way MySQL will throw an error if an incorrect type is used. Read
https://dev.mysql.com/doc/refman/5.7/en/constraint-invalid-data.html

MySQL REPEATABLE READ sees another session's commit when using SELECT ... FOR UPDATE

My table's definition is
CREATE TABLE auto_inc (
id int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
At first there are four rows:
| id |
| 1 |
| 2 |
| 3 |
| 4 |
I opened session 1 and executed
#session 1
set transaction isolation level REPEATABLE READ
start transaction;
select * from auto_inc
which returned four rows: 1,2,3,4. Then I opened another session 2 and executed
#session 2
insert into auto_inc(`id`) values(null)
and the insert succeeded. Back in session 1, I executed
#session 1
select * from auto_inc;#command 1
select * from auto_inc for update;#command 2
Command 1 returned four rows: 1,2,3,4. But command 2 returned 1,2,3,4,5. Could anyone give me some clues as to why command 2 sees session 2's insertion?
Thanks in advance!
Why can session 2 insert new data?
Under REPEATABLE READ, the second SELECT is guaranteed to see the rows it saw at the first SELECT unchanged. New rows may be added by a concurrent transaction, but the existing rows cannot be deleted or changed.
https://stackoverflow.com/a/4036063/3020810
Why can session 1 see the insertion?
Under REPEATABLE READ, consistent reads within the same transaction read the snapshot established by the first read. If you want to see the "freshest" state of the database, use either the READ COMMITTED isolation level or a locking read, and a SELECT ... FOR UPDATE is a locking read.
Consistent Nonlocking Reads: https://dev.mysql.com/doc/refman/5.6/en/innodb-consistent-read.html
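A small sketch (assuming the same auto_inc table) of how session 1 could see the newly committed row without taking locks:
#option A: end the old snapshot and start a new one under REPEATABLE READ
commit;
start transaction;
select * from auto_inc;#returns 1,2,3,4,5
#option B: use READ COMMITTED, where every consistent read uses a fresh snapshot
commit;
set session transaction isolation level read committed;
start transaction;
select * from auto_inc;#sees rows committed by other sessions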

Very slow query with ORDER BY and a larger LIMIT

MySQL 5.6, 64-bit, RHEL 5.8
A query on a large table with ORDER BY and LIMIT 'row_count' (or LIMIT 0,'row_count') becomes very, very slow if the 'row_count' is larger than the real size of the result set.
case 1: The query below is very fast (No 'LIMIT'):
mysql> SELECT * FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC;
+---
| ...
6 rows in set (0.01 sec)
case 2: The query below is also fast ('LIMIT 5'):
mysql> SELECT * FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC LIMIT 5;
+---
| ...
5 rows in set (0.42 sec)
case 3: The query below is very, very slow ('LIMIT 7'; any 'row_count' value > 6 behaves the same):
mysql> SELECT * FROM syslog WHERE
(ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC LIMIT 7;
+---
| ...
6 rows in set (28 min 7.24 sec)
The only difference between the cases is the LIMIT clause: no LIMIT, "LIMIT 5", and "LIMIT 7".
Why is case 3 so slow?
Some investigations into case 3:
Running 'SHOW PROCESSLIST', the State of the query stays at 'Sending data'.
Checked the server memory; there is still enough available.
Increased the SESSION buffers 'read_buffer_size', 'read_rnd_buffer_size', 'sort_buffer_size' to a very large value (16MB) right before running the query, but it didn't help.
Also queried only the column 'id' (SELECT id FROM syslog ...), with the same result.
While the query was running, issued the same query but with row_count < 5 (e.g. 'LIMIT 5') in another MySQL connection; the latter still returned quickly.
With a different condition, for example extending the time range to BETWEEN '2013-10-03' AND '2013-11-05' so the result has 149 rows: with LIMIT 140 it's fast, with LIMIT 150 it's very, very slow. So strange.
Currently, in practice, our website's code gets the real result row count first (SELECT COUNT(*) FROM ..., no ORDER BY, no LIMIT), and afterwards runs the query with a LIMIT 'row_count' value not exceeding that count. Ugly.
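A sketch of that two-step workaround, using the same filter as case 3:
SELECT COUNT(*) FROM syslog
WHERE (ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1;
-- returns 6, so the requested LIMIT (7) is capped at 6:
SELECT * FROM syslog
WHERE (ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC LIMIT 6;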
The EXPLAIN for case 3:
-+-----..-+----..+-------+-----..+--------+---------+-----+-----+------------+
| sele.. | table| type | poss..| key | key_len | ref | rows| Extra |
-+-----..-+----..+-------+-----..+--------+---------+-----+-----+------------+
| SIMPLE | syslo| index | ... | PRIMARY| 8 | NULL| 132 | Using where|
-+-----..-+----..+-------+-----..+--------+---------+-----+-----+------------+
1 row in set (0.00 sec)
Table definition:
CREATE TABLE syslog (
id BIGINT NOT NULL AUTO_INCREMENT,
ReceivedAt TIMESTAMP NOT NULL DEFAULT 0,
ReportedTime TIMESTAMP NOT NULL DEFAULT 0,
Priority SMALLINT,
Facility SMALLINT,
FromHost VARCHAR(60),
Message TEXT,
InfoUnitID INT NOT NULL DEFAULT 0,
SysLogTag VARCHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY idx_ReportedTime_Priority_id (ReportedTime,Priority,id),
KEY idx_Facility (Facility),
KEY idx_SysLogTag (SysLogTag(16)),
KEY idx_FromHost (FromHost(16))
);
MySQL is notorious for its behaviour around the ORDER BY ... DESC + LIMIT combination.
See: http://www.mysqlperformanceblog.com/2006/09/01/order-by-limit-performance-optimization/
Please try:
SELECT *
FROM syslog FORCE INDEX (idx_Facility)
WHERE
ReportedTime BETWEEN '2013-11-04' AND '2013-11-05'
AND Priority<3
AND Facility=1
ORDER BY id DESC
LIMIT 7;
You need to force the use of the index that the fast queries used (get it from their EXPLAIN plans, in the key column).
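A quick way to verify the plan change (not from the original answer, just a sanity check): compare EXPLAIN output with and without the hint.
EXPLAIN SELECT * FROM syslog
WHERE (ReportedTime BETWEEN '2013-11-04' AND '2013-11-05') AND
Priority<3 AND Facility=1 ORDER BY id DESC LIMIT 7;
-- key = PRIMARY is the slow plan; with FORCE INDEX you should see the forced index
-- (e.g. idx_Facility, or whichever index the fast queries used) in the key column instead.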

How can I make MySQL as fast as a flat file in this scenario?

Assume a key-value table with at least 10s of millions of rows.
Define an operation that takes a large number of IDs (again, 10s of millions), finds the corresponding values, and sums them.
Using a database, this operation seems like it can approach (disk seek time) * (number of lookups).
Using a flat file, and reading through the entire contents, this operation will approach (file size)/(drive transfer rate).
Plugging in some (rough) values (from wikipedia and/or experimentation):
seek time = 0.5ms
transfer rate = 64MByte/s
file size = 800M (for 70 million int/double key/values)
65 million value lookups
DB time = 0.5ms * 65000000 = 32500s = 9 hours
Flat file = 800M/(64MB/s) = 12s
Experimental results are not as bad for MySQL, but the flat file still wins.
Experiments:
Create InnoDB and MyISAM id/value pair tables. e.g.
CREATE TABLE `ivi` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`val` double DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB
Fill with 32 million rows of data of your choice. Query with:
select sum(val) from ivi where id not in (1,12,121,1121); -- be sure to change the numbers each time or clear the query cache
Use the following code to create & read key/value flat file from java.
private static void writeData() throws IOException {
    long t = -System.currentTimeMillis();
    File dat = new File("/home/mark/dat2");
    if (dat.exists()) {
        dat.delete();
    }
    FileOutputStream fos = new FileOutputStream(dat);
    ObjectOutputStream os = new ObjectOutputStream(new BufferedOutputStream(fos));
    for (int i = 0; i < 32000000; i++) {
        os.writeInt(i);
        os.writeDouble(i / 2.0);
    }
    os.flush();
    os.close();
    t += System.currentTimeMillis();
    System.out.println("time ms = " + t);
}

private static void performSummationQuery() throws IOException {
    long t = -System.currentTimeMillis();
    File dat = new File("/home/mark/dat2");
    FileInputStream fin = new FileInputStream(dat);
    ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(fin));
    HashSet<Integer> set = new HashSet<Integer>(Arrays.asList(11, 101, 1001, 10001, 100001));
    int i;
    double d;
    double sum = 0;
    try {
        while (true) {
            i = in.readInt();
            d = in.readDouble();
            if (!set.contains(i)) {
                sum += d;
            }
        }
    } catch (EOFException e) {
        // end of file reached - fall through with the accumulated sum
    } finally {
        in.close();
    }
    System.out.println("sum = " + sum);
    t += System.currentTimeMillis();
    System.out.println("time ms = " + t);
}
RESULTS:
InnoDB 8.0-8.1s
MyISAM 3.1-16.5s
Stored proc 80-90s
FlatFile 1.6-2.4s (even after: echo 3 > /proc/sys/vm/drop_caches)
My experiments have shown that a flat file wins against the database here. Unfortunately, I still need to do "standard" CRUD operations on this table. But this is the use pattern that's killing me.
So what's the best way I can have MySQL behave like itself most of the time, yet win over a flat file in the above scenario?
EDIT:
To clarify some points:
1. I have dozens of such tables; some will have hundreds of millions of rows, and I cannot store them all in RAM.
2. The case I have described is what I need to support. The values associated with an ID might change, and the selection of IDs is ad hoc. Therefore there is no way to pre-generate & cache any sums; I need to do the work of "find each value and sum them all" every time.
Thanks.
Your numbers assume that MySQL will perform disk I/O 100% of the time while in practice that is rarely the case. If your MySQL server has enough RAM and your table is indexed appropriately your cache hit rate will rapidly approach 100% and MySQL will perform very little disk I/O as a direct result of your sum operation. If you are frequently having to deal with calculations across 10,000,000 rows you may also consider adjusting your schema to reflect real-world usage (keeping a "cached" sum on hand isn't always a bad idea depending on your specific needs).
I highly recommend you put together a test database, throw in tens of millions of test rows, and run some real queries in MySQL to determine how the system will perform. Spending 15 minutes doing this would give you far more accurate information.
Telling MySQL to ignore the primary (and only) index speeds both queries up.
For InnoDB it saves about a second on the queries. On MyISAM it keeps the query time consistently at the minimum time seen.
The change is to add
ignore index(`PRIMARY`)
after the table name in the query.
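Applied to the test query above, that looks like:
select sum(val) from ivi ignore index(`PRIMARY`) where id not in (1,12,121,1121);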
EDIT:
I appreciate all the input but much of it was of the form "you shouldn't do this", "do something completely different", etc. None of it addressed the question at hand:
"So what's the best way I can have
MySQL behave like itself most of the
time, yet win over a flat file in the
above scenario?"
So far, the solution I have posted (use MyISAM and ignore the index) seems to be closest to flat-file performance for this use case, while still giving me a database when I need one.
I'd use a summary table maintained by triggers, which gives sub-second performance - something like the following:
select
st.tot - v.val
from
ivi_sum_total st
join
(
select sum(val) as val from ivi where id in (1,12,121,1121)
) v;
+---------------------+
| st.tot - v.val       |
+---------------------+
| 1048317638720.78064 |
+---------------------+
1 row in set (0.07 sec)
Full schema
drop table if exists ivi_sum_total;
create table ivi_sum_total
(
tot decimal(65,5) default 0
)
engine=innodb;
drop table if exists ivi;
create table ivi
(
id int unsigned not null auto_increment,
val decimal(65,5) default 0,
primary key (id, val)
)
engine=innodb;
delimiter #
create trigger ivi_before_ins_trig before insert on ivi
for each row
begin
update ivi_sum_total set tot = tot + new.val;
end#
create trigger ivi_before_upd_trig before update on ivi
for each row
begin
update ivi_sum_total set tot = (tot - old.val) + new.val;
end#
-- etc...
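The elided part would cover deletes as well; a minimal sketch of what such a trigger could look like (an assumption, using the same # delimiter as above):
create trigger ivi_before_del_trig before delete on ivi
for each row
begin
update ivi_sum_total set tot = tot - old.val;
end#
delimiter ;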
Testing
select count(*) from ivi;
+----------+
| count(*) |
+----------+
| 32000000 |
+----------+
select
st.tot - v.val
from
ivi_sum_total st
join
(
select sum(val) as val from ivi where id in (1,12,121,1121)
) v;
+---------------------+
| st.tot - v.val       |
+---------------------+
| 1048317638720.78064 |
+---------------------+
1 row in set (0.07 sec)
select sum(val) from ivi where id not in (1,12,121,1121);
+---------------------+
| sum(val)             |
+---------------------+
| 1048317638720.78064 |
+---------------------+
1 row in set (29.89 sec)
select * from ivi_sum_total;
+---------------------+
| tot                  |
+---------------------+
| 1048317683047.43227 |
+---------------------+
1 row in set (0.03 sec)
select * from ivi where id = 2;
+----+-------------+
| id | val         |
+----+-------------+
|  2 | 11781.30443 |
+----+-------------+
1 row in set (0.01 sec)
start transaction;
update ivi set val = 0 where id = 2;
commit;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
select * from ivi where id = 2;
+----+---------+
| id | val     |
+----+---------+
|  2 | 0.00000 |
+----+---------+
1 row in set (0.00 sec)
select * from ivi_sum_total;
+---------------------+
| tot                  |
+---------------------+
| 1048317671266.12784 |
+---------------------+
1 row in set (0.00 sec)
select
st.tot - v.val
from
ivi_sum_total st
join
(
select sum(val) as val from ivi where id in (1,12,121,1121)
) v;
+---------------------+
| st.tot - v.val       |
+---------------------+
| 1048317626939.47621 |
+---------------------+
1 row in set (0.01 sec)
select sum(val) from ivi where id not in (1,12,121,1121);
+---------------------+
| sum(val)             |
+---------------------+
| 1048317626939.47621 |
+---------------------+
1 row in set (31.07 sec)
You are comparing apples and oranges, as far as I can see. MySQL (or any other relational database) isn't meant to work with data that does disk I/O all the time; that destroys the point of the index. Even worse, the index becomes a burden since it doesn't fit in RAM at all. That's why people use sharding and summary tables. In your example, the size of the database (and thus the disk I/O) would be much larger than the flat file, since there is a primary index on top of the data itself. As z5h stated, ignoring the primary index can save you some time, but it will never be as fast as a plain text file.
I would suggest you use summary tables, e.g. have a background job compute a summary and then UNION this summary table with the rest of the "live" table. But even then MySQL would not handle rapidly growing data well; after some hundreds of millions of rows it would start to fail. That's why people work on distributed systems like HDFS and map/reduce frameworks like Hadoop.
P.S.: My technical examples are not 100% exact; I just want to go through the concepts.
Is it a single-user system?
Flat-file performance will degrade significantly with multiple users. With a DB, it "should" schedule disk reads to satisfy queries running in parallel.
There is one option nobody has considered yet...
Since the aforementioned Java code uses a HashSet, why not use a HASH index?
By default, indexes in MyISAM tables use BTREE indexing.
By default, indexes in MEMORY tables use HASH indexing.
Simply force the MyISAM table to use a HASH index instead of a BTREE:
CREATE TABLE `ivi`
(
`id` int(11) NOT NULL AUTO_INCREMENT,
`val` double DEFAULT NULL,
PRIMARY KEY (`id`) USING HASH
) ENGINE=MyISAM;
Now that should level the playing field a little. However, index range searches perform poorly with a hash index. If you retrieve one id at a time, it should be faster than your previous MyISAM test.
If you want to load the data much faster
Get rid of the AUTO_INCREMENT property
Get rid of the primary key
Use a regular index
CREATE TABLE `ivi`
(
`id` int(11) NOT NULL,
`val` double DEFAULT NULL,
KEY id (`id`) USING HASH
) ENGINE=MyISAM;
Then do something like this:
ALTER TABLE ivi DISABLE KEYS;
...
... (Load data and manually generate id)
...
ALTER TABLE ivi ENABLE KEYS;
This will build the index after the data is done being loaded.
You should also consider sizing key_buffer_size in /etc/my.cnf to handle large numbers of MyISAM keys.
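For example (the value is only an assumption; size it to your available RAM), the runtime equivalent of a key_buffer_size line in /etc/my.cnf would be:
SET GLOBAL key_buffer_size = 268435456;   -- 256M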
Give it a try and let us know if this helped and what you found!
You might want to have a look at the NDB API. I imagine these people were able to achieve speeds close to working with a flat file while still having the data stored in InnoDB.

How can I tell when a MySQL table was last updated?

In the footer of my page, I would like to add something like "last updated the xx/xx/200x", with this date being the last time a certain MySQL table was updated.
What is the best way to do that? Is there a function to retrieve the last-updated date? Should I access the database every time I need this value?
In later versions of MySQL you can use the information_schema database to tell you when a table was last updated:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'dbname'
AND TABLE_NAME = 'tabname'
This does of course mean opening a connection to the database.
An alternative option would be to "touch" a particular file whenever the MySQL table is updated:
On database updates:
Open your timestamp file in O_RDWR mode
close it again
or alternatively
use touch(), the PHP equivalent of the utimes() function, to change the file timestamp.
On page display:
use stat() to read back the file modification time.
I'm surprised no one has suggested tracking last update time per row:
mysql> CREATE TABLE foo (
id INT PRIMARY KEY,
x INT,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP,
KEY (updated_at)
);
mysql> INSERT INTO foo (id, updated_at) VALUES (1, NOW() - INTERVAL 3 DAY), (2, NOW());
mysql> SELECT * FROM foo;
+----+------+---------------------+
| id | x    | updated_at          |
+----+------+---------------------+
|  1 | NULL | 2013-08-18 03:26:28 |
|  2 | NULL | 2013-08-21 03:26:28 |
+----+------+---------------------+
mysql> UPDATE foo SET x = 1234 WHERE id = 1;
This updates the timestamp even though we didn't mention it in the UPDATE.
mysql> SELECT * FROM foo;
+----+------+---------------------+
| id | x    | updated_at          |
+----+------+---------------------+
|  1 | 1234 | 2013-08-21 03:30:20 | <-- this row has been updated
|  2 | NULL | 2013-08-21 03:26:28 |
+----+------+---------------------+
Now you can query for the MAX():
mysql> SELECT MAX(updated_at) FROM foo;
+---------------------+
| MAX(updated_at)     |
+---------------------+
| 2013-08-21 03:30:20 |
+---------------------+
Admittedly, this requires more storage (4 bytes per row for TIMESTAMP).
But it works for InnoDB tables on MySQL versions before 5.7.15, where INFORMATION_SCHEMA.TABLES.UPDATE_TIME doesn't.
I don't have the information_schema database (I'm using MySQL version 4.1.16), so in this case you can query this instead:
SHOW TABLE STATUS FROM your_database LIKE 'your_table';
It will return these columns:
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
As you can see there is a column called: "Update_time" that shows you the last update time for your_table.
The simplest thing would be to check the timestamp of the table files on disk. For example, you can check under your data directory:
cd /var/lib/mysql/<mydatabase>
ls -lhtr *.ibd
This should list all tables by when they were last modified, oldest first.
For a list of recent table changes use this:
SELECT UPDATE_TIME, TABLE_SCHEMA, TABLE_NAME
FROM information_schema.tables
ORDER BY UPDATE_TIME DESC, TABLE_SCHEMA, TABLE_NAME
I would create a trigger that catches all updates/inserts/deletes and writes a timestamp into a custom table, something like
tablename | timestamp
simply because I don't like the idea of reading the DB server's internal system tables directly.
Although there is an accepted answer, I don't feel that it is the right one. It is the simplest way to achieve what is needed, but even if UPDATE_TIME is already populated for InnoDB (the docs actually tell you that you may still get NULL), if you read the MySQL docs, even in the current version (8.0) using UPDATE_TIME is not the right option, because:
Timestamps are not persisted when the server is restarted or when the
table is evicted from the InnoDB data dictionary cache.
If I understand correctly (I can't verify it on a server right now), the timestamp gets reset after a server restart.
As for real (and, well, costly) solutions, you have Bill Karwin's solution with CURRENT_TIMESTAMP, and I'd like to propose a different one that is based on triggers (I'm using that one).
You start by creating a separate table (or maybe you have some other table that can be used for this purpose) which will work as storage for global values (here, timestamps). You need to store two fields - the table name (or whatever value you'd like to use as the table id) and a timestamp. Once you have it, initialize it with this table id + a starting date (NOW() is a good choice :) ).
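A minimal sketch of such a storage table (the names are placeholders matching the procedure below, not a fixed schema):
CREATE TABLE `TIMESTAMPS_TABLE_NAME` (
`table_name_column` VARCHAR(64) NOT NULL PRIMARY KEY,
`timestamp_column` DATETIME NOT NULL
);
INSERT INTO `TIMESTAMPS_TABLE_NAME` (`table_name_column`, `timestamp_column`)
VALUES ('TABLE_NAME', NOW());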
Now you move to the tables you want to observe and add AFTER INSERT/UPDATE/DELETE triggers calling this or a similar procedure:
CREATE PROCEDURE `timestamp_update` ()
BEGIN
UPDATE `SCHEMA_NAME`.`TIMESTAMPS_TABLE_NAME`
SET `timestamp_column`=DATE_FORMAT(NOW(), '%Y-%m-%d %T')
WHERE `table_name_column`='TABLE_NAME';
END
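And a minimal sketch of one of the triggers that would call it (an illustration; the same procedure would be called from the AFTER UPDATE and AFTER DELETE triggers as well):
CREATE TRIGGER `TABLE_NAME_after_insert`
AFTER INSERT ON `TABLE_NAME`
FOR EACH ROW
CALL `timestamp_update`();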
OS level analysis:
Find where the DB is stored on disk:
grep datadir /etc/my.cnf
datadir=/var/lib/mysql
Check for most recent modifications
cd /var/lib/mysql/{db_name}
ls -lrt
Should work on all database types.
a) This will show you all tables and their last update dates:
SHOW TABLE STATUS FROM db_name;
Then you can further ask for a specific table:
SHOW TABLE STATUS FROM db_name like 'table_name';
b) With the examples above you cannot sort on 'Update_time', but using SELECT you can:
SELECT * FROM information_schema.tables WHERE TABLE_SCHEMA='db_name' ORDER BY UPDATE_TIME DESC;
To further ask about a particular table:
SELECT * FROM information_schema.tables WHERE TABLE_SCHEMA='db_name' AND table_name='table_name' ORDER BY UPDATE_TIME DESC;
I got this to work locally, but not on my shared host for my public website (rights issue I think).
SELECT last_update FROM mysql.innodb_table_stats WHERE table_name = 'yourTblName';
'2020-10-09 08:25:10'
MySQL 5.7.20-log on Win 8.1
Just grab the file's modified date from the file system. In my language that is:
tbl_updated = file.update_time(
"C:\ProgramData\MySQL\MySQL Server 5.5\data\mydb\person.frm")
Output:
1/25/2013 06:04:10 AM
If you are running Linux you can use inotify to watch the table or the database directory. inotify is available from PHP, node.js, Perl, and I suspect most other languages. Of course you must have installed inotify or had your ISP install it. A lot of ISPs will not.
Not sure if this would be of any interest. Using mysql-proxy between MySQL and the clients, with a Lua script that updates a key's value in memcached according to interesting table changes (UPDATE, DELETE, INSERT), was the solution I built quite recently. If the wrappers supported hooks or triggers in PHP, this could have been easier. None of the wrappers does this as of now.
I made a column named update-at in phpMyAdmin and got the current time from the Date() method in my code (Node.js). With every change in the table, this column holds the time of the change.
Same as the others, but with some conditions I've used to save time:
SELECT
UPDATE_TIME,
TABLE_SCHEMA,
TABLE_NAME
FROM
information_schema.tables
WHERE
1 = 1
AND UPDATE_TIME > '2021-11-09 00:00:00'
AND TABLE_SCHEMA = 'db_name_here'
AND TABLE_NAME not in ('table_name_here')
ORDER BY
UPDATE_TIME DESC,
TABLE_SCHEMA,
TABLE_NAME;
This is what I did, I hope it helps.
<?php
mysql_connect("localhost", "USER", "PASSWORD") or die(mysql_error());
mysql_select_db("information_schema") or die(mysql_error());
$query1 = "SELECT `UPDATE_TIME` FROM `TABLES` WHERE
    `TABLE_SCHEMA` LIKE 'DataBaseName' AND `TABLE_NAME` LIKE 'TableName'";
$result1 = mysql_query($query1) or die(mysql_error());
while ($row = mysql_fetch_array($result1)) {
    echo "<strong>1r tr.: </strong>" . $row['UPDATE_TIME'];
}
?>
Cache the query in a global variable when it is not available.
Create a webpage to force the cache to be reloaded when you update it.
Add a call to the reloading page into your deployment scripts.