So, in Laravel's app.php I have the following timezone set:
'timezone' => 'America/Denver',
In MySQL settings I've got the same timezone. When I run select now() I get the current Denver time.
However, when I create a record in any table in the database, the created_at field (with default value set to CURRENT_TIMESTAMP) somehow ends up 5 hours ahead of Denver.
I believe it's somehow defaulting to UTC time, but I am not sure. All online resources I've found related to this issue claim that setting the timezone in Laravel should do the trick.
What else can I do to make sure I get the correct timezone saved in CURRENT_TIMESTAMP?
I don't think server-wide PHP settings should take precedence over what's set in MySQL or in Laravel in this matter, but I have still gone ahead and tried setting the timezone in php.ini to America/Denver, with no luck. It was previously commented out (not set to UTC).
Use
SET SESSION time_zone = 'America/Denver';
In a raw query (DB::select(DB::raw("SET SESSION time_zone = 'America/Denver'"))) before inserting and updating.
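Before relying on that, it can help to check what time zone the connection is actually using (a quick diagnostic sketch, not Laravel-specific):
-- If @@session.time_zone says SYSTEM, the server's OS time zone is what CURRENT_TIMESTAMP uses
SELECT @@global.time_zone, @@session.time_zone, NOW(), UTC_TIMESTAMP();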
Test case
CREATE TABLE test (
id INT
, created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO test (id) VALUES(1);
SET SESSION time_zone = 'America/Denver';
INSERT INTO test (id) VALUES(2);
Possible results
| id | created_at |
| --- | ------------------- |
| 1 | 2019-03-04 13:57:31 |
| 2 | 2019-03-04 06:57:31 |
Eloquent creates a new Carbon object when it sets the timestamp for created_at; it doesn't use MySQL's default. Carbon uses the timezone set via date_default_timezone_set(), which Laravel is setting.
A rather obvious answer: have you tried clearing your config cache?
php artisan config:clear
As an aside, it is generally advisable to use UTC across everything and only convert to a local timezone at the last possible moment.
From Carbon:
// PS: we recommend you to work with UTC as default timezone and only use
// other timezones (such as the user timezone) on display
So I had a MySQL query which used to work as intended, but now I think something has happened and it no longer works.
What I have is 2 tables that I want to join: users and logs.
I want a list of all the user IDs (UID) in the users table that have not logged in today, so I used this query:
SELECT users.UID
FROM users
LEFT JOIN logs
ON users.UID = logs.UID
AND DATE(logs.SCANTIME) = DATE(SYSDATE())
WHERE logs.UID is null
The above query used to work, but now it returns a list of UIDs that are not supposed to be there.
If I look into the log today I can see multiple rows; I have omitted all but one to save space.
mysql> SELECT UID, SCANTIME FROM logs WHERE DATE(scantime)=DATE(SYSDATE());
+------------+---------------------+
| UID | SCANTIME |
+------------+---------------------+
.............
| AA9B351B | 2017-08-02 06:13:21 |
.............
+------------+---------------------+
63 rows in set (0.00 sec)
So this user AA9B351B has clearly logged in today, yet he shows up when I run the query above. Thanks for any replies.
The server time may have become incorrect, so SYSDATE() is returning the wrong value. Check the system time, and also run the SQL query with the SYSDATE() function replaced with a literal date (just to test). Related to this, check that the time zone has not changed. If the server time is wrong, reset it.
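For example, a quick sketch reusing the tables and the date from the question:
-- Compare the clock and time zone the server is actually using
SELECT NOW(), SYSDATE(), @@global.time_zone, @@session.time_zone;
-- Re-run the original query with a literal date instead of SYSDATE()
SELECT users.UID
FROM users
LEFT JOIN logs
ON users.UID = logs.UID
AND DATE(logs.SCANTIME) = '2017-08-02'
WHERE logs.UID IS NULL;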
I have a WEBSITE table like so:
WEBSITE
web ID | url | progress
-------------------------------
1 | example.com | 67
I have a PROGRESS table like so:
PROGRESS
progress id | linking website id | amount to increase
---------------------------------------------------
1 | 1 | 60
2 | 1 | 7
When a row is inserted into PROGRESS using INSERT INTO,
I use PHP and MySQL to get the cumulative value from the PROGRESS table, and then store the new total in the progress column of the WEBSITE table.
HOWEVER...
I was wondering if I could use these triggers I've been hearing about to automatically sum up the new progress value and store it?
Is this possible?
You don't really need to use a trigger; you can just add the progress to the existing row in website. But if you want to try a trigger, you can use the following:
delimiter $$
CREATE TRIGGER progress_update AFTER INSERT ON `progress`
FOR EACH ROW BEGIN
UPDATE `website` SET `progress` = `progress` + NEW.amount_to_increase WHERE web_ID = NEW.linking_website_id;
END;
$$
delimiter ;
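With the trigger in place, inserting a progress row should bump the running total automatically; a quick check, assuming the column names used above:
INSERT INTO `progress` (linking_website_id, amount_to_increase) VALUES (1, 10);
-- The website row should now show the increased total
SELECT web_ID, progress FROM `website` WHERE web_ID = 1;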
You can also find further information on the syntax and use of triggers on the manual page 12.1.11, CREATE TRIGGER Syntax.
You could use triggers, but you don't need to; just do this:
update website set
progress = progress + ?
where web_id = ?
And take your values from your query. This is guaranteed to work, because the update is atomic (you don't have to worry about other processes inserting or updating concurrently).
I am working with MySQL and using stored procedures. I have a profiling tool that I am using to profile the code that communicates with MySQL through the stored procedures and I was wondering if there was a tool or capability within MySQL client to profile stored procedure executions. What I have in mind is something that's similar to running queries with profiling turned on. I am using MySQL 5.0.41 on Windows XP.
Thanks in advance.
There is a wonderfully detailed article about such profiling: http://mablomy.blogspot.com/2015/03/profiling-stored-procedures-in-mysql-57.html
As of MySQL 5.7, you can use performance_schema to get information about the duration of every statement in a stored procedure. Simply:
1) Activate the profiling (set it back to "NO" afterward if you want to disable it)
UPDATE performance_schema.setup_consumers SET ENABLED="YES"
WHERE NAME = "events_statements_history_long";
2) Run the procedure
CALL test('with parameters', '{"if": "needed"}');
3) Query the performance schema to get the overall event information
SELECT event_id,sql_text,
CONCAT(TIMER_WAIT/1000000000,"ms") AS time
FROM performance_schema.events_statements_history_long
WHERE event_name="statement/sql/call_procedure";
| event_id | sql_text       | time        |
| -------- | -------------- | ----------- |
| 2432     | CALL test(...) | 1726.4098ms |
4) Get the detailed information for the event you want to profile
SELECT EVENT_NAME, SQL_TEXT,
CONCAT(TIMER_WAIT/1000000000,"ms") AS time
FROM performance_schema.events_statements_history_long
WHERE nesting_event_id=2432 ORDER BY event_id;
| EVENT_NAME        | SQL_TEXT                               | time     |
| ----------------- | -------------------------------------- | -------- |
| statement/sp/stmt | ... 1 query of the procedure ...       | 4.6718ms |
| statement/sp/stmt | ... another query of the procedure ... | 4.6718ms |
| statement/sp/stmt | ... another etc ...                    | 4.6718ms |
This way, you can tell which query takes the longest time in your procedure call.
I don't know of any tool that would turn this result set into a KCachegrind-friendly file or similar.
Note that this should not be activated on a production server: it can hurt performance, grow data size, and since performance_schema.events_statements_history_long holds the procedure's parameter values, it may also be a security issue (if a parameter is, for instance, an end user's email or password).
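If you do enable this temporarily, a minimal cleanup sketch afterwards (same consumer as in step 1):
-- Turn the consumer back off and discard the collected history
UPDATE performance_schema.setup_consumers SET ENABLED = 'NO'
WHERE NAME = 'events_statements_history_long';
TRUNCATE TABLE performance_schema.events_statements_history_long;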
You can turn on the slow query logging within MySQL.
Take a look at this other SO question:
MYSQL Slow Query
Depending on which version you are running, you may actually be able to set the value to zero, so every single query in the DB shows up in the slow query log.
See here for additional details:
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_long_query_time
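For example, on versions where these are dynamic system variables (older releases such as the 5.0.x in the question typically need log-slow-queries in my.cnf instead), a rough sketch:
-- Log every statement by treating everything as a 'slow' query
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0;
-- Verify the current settings
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';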
In the footer of my page, I would like to add something like "last updated the xx/xx/200x" with this date being the last time a certain mySQL table has been updated.
What is the best way to do that? Is there a function to retrieve the last updated date? Should I access the database every time I need this value?
In later versions of MySQL you can use the information_schema database to tell you when a table was last updated:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'dbname'
AND TABLE_NAME = 'tabname'
This does of course mean opening a connection to the database.
An alternative option would be to "touch" a particular file whenever the MySQL table is updated:
On database updates:
Open your timestamp file in O_RDWR mode
close it again
or alternatively
use touch(), the PHP equivalent of the utimes() function, to change the file timestamp.
On page display:
use stat() to read back the file modification time.
I'm surprised no one has suggested tracking last update time per row:
mysql> CREATE TABLE foo (
id INT PRIMARY KEY,
x INT,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP,
KEY (updated_at)
);
mysql> INSERT INTO foo (id, updated_at) VALUES (1, NOW() - INTERVAL 3 DAY), (2, NOW());
mysql> SELECT * FROM foo;
+----+------+---------------------+
| id | x | updated_at |
+----+------+---------------------+
| 1 | NULL | 2013-08-18 03:26:28 |
| 2 | NULL | 2013-08-21 03:26:28 |
+----+------+---------------------+
mysql> UPDATE foo SET x = 1234 WHERE id = 1;
This updates the timestamp even though we didn't mention it in the UPDATE.
mysql> SELECT * FROM foo;
+----+------+---------------------+
| id | x | updated_at |
+----+------+---------------------+
| 1 | 1234 | 2013-08-21 03:30:20 | <-- this row has been updated
| 2 | NULL | 2013-08-21 03:26:28 |
+----+------+---------------------+
Now you can query for the MAX():
mysql> SELECT MAX(updated_at) FROM foo;
+---------------------+
| MAX(updated_at) |
+---------------------+
| 2013-08-21 03:30:20 |
+---------------------+
Admittedly, this requires more storage (4 bytes per row for TIMESTAMP).
But this also works for InnoDB tables on MySQL versions before 5.7.15, where INFORMATION_SCHEMA.TABLES.UPDATE_TIME doesn't.
I don't have the information_schema database (I'm using MySQL version 4.1.16), so in this case you can query this:
SHOW TABLE STATUS FROM your_database LIKE 'your_table';
It will return these columns:
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment |
As you can see there is a column called: "Update_time" that shows you the last update time for your_table.
The simplest thing would be to check the timestamp of the table files on disk. For example, you can check under your data directory:
cd /var/lib/mysql/<mydatabase>
ls -lhtr *.ibd
This should list all the tables, sorted by when they were last modified, oldest first.
For a list of recent table changes use this:
SELECT UPDATE_TIME, TABLE_SCHEMA, TABLE_NAME
FROM information_schema.tables
ORDER BY UPDATE_TIME DESC, TABLE_SCHEMA, TABLE_NAME
I would create a trigger that catches all updates/inserts/deletes and writes a timestamp into a custom table, something like
tablename | timestamp
simply because I don't like the idea of reading the DB server's internal system tables directly.
Although there is an accepted answer, I don't feel that it is the right one. It is the simplest way to achieve what is needed, but even now that UPDATE_TIME is populated for InnoDB (the docs actually tell you that you may still get NULL), if you read the MySQL docs, even in the current version (8.0) using UPDATE_TIME is not the right option, because:
Timestamps are not persisted when the server is restarted or when the table is evicted from the InnoDB data dictionary cache.
If I understand correctly (I can't verify it on a server right now), the timestamp gets reset after a server restart.
As for real (and, well, costlier) solutions, there is Bill Karwin's answer with CURRENT_TIMESTAMP, and I'd like to propose a different one based on triggers (that's the one I'm using).
You start by creating a separate table (or maybe you have some other table that can be used for this purpose) which will act as storage for global values (here, timestamps). You need to store two fields: the table name (or whatever value you'd like to use as a table id) and a timestamp. Once you have it, initialize it with that table id and a starting date (NOW() is a good choice :) ).
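A minimal sketch of such a storage table (all names here are placeholders, matching the procedure below):
CREATE TABLE `TIMESTAMPS_TABLE_NAME` (
`table_name_column` VARCHAR(64) NOT NULL PRIMARY KEY,
`timestamp_column` DATETIME NOT NULL
);
-- One row per observed table, initialized with a starting date
INSERT INTO `TIMESTAMPS_TABLE_NAME` (`table_name_column`, `timestamp_column`)
VALUES ('TABLE_NAME', NOW());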
Now, you move to tables you want to observe and add triggers AFTER INSERT/UPDATE/DELETE with this or similar procedure:
CREATE PROCEDURE `timestamp_update` ()
BEGIN
UPDATE `SCHEMA_NAME`.`TIMESTAMPS_TABLE_NAME`
SET `timestamp_column`=DATE_FORMAT(NOW(), '%Y-%m-%d %T')
WHERE `table_name_column`='TABLE_NAME';
END
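A matching trigger on one of the observed tables might then look like this (OBSERVED_TABLE is a placeholder; repeat for AFTER UPDATE and AFTER DELETE as needed):
delimiter $$
CREATE TRIGGER `observed_table_after_insert` AFTER INSERT ON `OBSERVED_TABLE`
FOR EACH ROW BEGIN
CALL timestamp_update();
END;
$$
delimiter ;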
OS level analysis:
Find where the DB is stored on disk:
grep datadir /etc/my.cnf
datadir=/var/lib/mysql
Check for most recent modifications
cd /var/lib/mysql/{db_name}
ls -lrt
Should work on all database types.
a) This will show you all tables and their last update dates:
SHOW TABLE STATUS FROM db_name;
then, you can further ask for specific table:
SHOW TABLE STATUS FROM db_name like 'table_name';
b) As in the above examples, you cannot sort on 'Update_time' with SHOW TABLE STATUS, but using SELECT you can:
SELECT * FROM information_schema.tables WHERE TABLE_SCHEMA='db_name' ORDER BY UPDATE_TIME DESC;
to further ask about particular table:
SELECT * FROM information_schema.tables WHERE TABLE_SCHEMA='db_name' AND table_name='table_name' ORDER BY UPDATE_TIME DESC;
I got this to work locally, but not on my shared host for my public website (rights issue I think).
SELECT last_update FROM mysql.innodb_table_stats WHERE table_name = 'yourTblName';
'2020-10-09 08:25:10'
MySQL 5.7.20-log on Win 8.1
Just grab the file's modified date from the file system. In my language that is:
tbl_updated = file.update_time(
"C:\ProgramData\MySQL\MySQL Server 5.5\data\mydb\person.frm")
Output:
1/25/2013 06:04:10 AM
If you are running Linux you can use inotify to watch the table or the database directory. inotify is available from PHP, Node.js, Perl, and I suspect most other languages. Of course you must have installed inotify or had your ISP install it. A lot of ISPs will not.
Not sure if this would be of any interest. Using mysql-proxy between MySQL and clients, and making use of a Lua script to update a key value in memcached according to interesting table changes (UPDATE, DELETE, INSERT), was the solution I built quite recently. If the wrappers supported hooks or triggers in PHP, this could have been easier. None of the wrappers do this as of now.
I made a column named update-at in phpMyAdmin and set it to the current time from the Date() method in my code (Node.js). With every change to the table, this column holds the time of the change.
Same as others, but with some conditions I've used to save time:
SELECT
UPDATE_TIME,
TABLE_SCHEMA,
TABLE_NAME
FROM
information_schema.tables
WHERE
1 = 1
AND UPDATE_TIME > '2021-11-09 00:00:00'
AND TABLE_SCHEMA = 'db_name_here'
AND TABLE_NAME NOT IN ('table_name_here')
ORDER BY
UPDATE_TIME DESC,
TABLE_SCHEMA,
TABLE_NAME;
This is what I did, I hope it helps.
<?php
mysql_connect("localhost", "USER", "PASSWORD") or die(mysql_error());
mysql_select_db("information_schema") or die(mysql_error());
$query1 = "SELECT `UPDATE_TIME` FROM `TABLES` WHERE
`TABLE_SCHEMA` LIKE 'DataBaseName' AND `TABLE_NAME` LIKE 'TableName'";
$result1 = mysql_query($query1) or die(mysql_error());
while($row = mysql_fetch_array($result1)) {
echo "<strong>1r tr.: </strong>".$row['UPDATE_TIME'];
}
?>
Cache the query in a global variable when it is not available.
Create a webpage to force the cache to be reloaded when you update it.
Add a call to the reloading page into your deployment scripts.