Our AWS RDS MySQL replica stopped replicating with the error below. What is the best way to approach it? If this keeps happening, I doubt we can keep a replica at all. We also can't just allow any date format to come in. Should we fix the problem or ignore the error?
The data type of last_used is datetime(6).
2022-09-15T17:11:10.044407Z 15395 [ERROR] [MY-010584] [Repl] Slave SQL for channel '': Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin-changelog.041268, end_log_pos 17028740; Error 'Incorrect DATETIME value: '2022-09-15 13:11:10.-99999'' on query. Default database: 'company_name'. Query: 'UPDATE numbers SET current_url = www',last_used = '2022-09-15 13:11:10.000001' WHERE tracking IN (8886424548) AND profile = 111111 AND (last_used < '2022-09-15 13:11:10.-99999' OR last_used IS NULL)', Error_code: MY-001525
We have now tried using 101 ms and 99 ms, and both return the correct value, so we have changed it to use 101 ms. It would be nice to know how we can prevent this from happening in all cases. Thanks!
AWS' team came back with an answer on this error:
As per the error, you are using an incorrect DATETIME format, '2022-09-15 13:11:10.-99999', in the UPDATE query on the table 'company'. The DATETIME value you are using has a negative fractional-seconds part. To fix this error, kindly change the fractional-seconds value to a positive one; fractional seconds can never be negative.
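We can't see the application code, but the shape of the bad literal ('.-99999') suggests the cutoff timestamp was built by subtracting milliseconds from the fractional-seconds field as a plain integer, which goes negative whenever the subtraction crosses a second boundary. A minimal Python sketch of the suspected failure mode and the arithmetic-safe fix (all names here are illustrative, not the actual application code):

```python
from datetime import datetime, timedelta

now = datetime(2022, 9, 15, 13, 11, 10, 500)  # 13:11:10.000500

# Buggy: subtract 100 ms from the microsecond field alone. The result goes
# negative whenever the subtraction crosses a second boundary, producing
# the '.-99999'-style literal seen in the replication error.
buggy_us = now.microsecond - 100_000           # -99500
buggy_literal = f"{now:%Y-%m-%d %H:%M:%S}.{buggy_us}"

# Safe: let timedelta borrow from the seconds field.
cutoff = now - timedelta(milliseconds=100)     # 13:11:09.900500
```

Computing the cutoff with timedelta (or pushing the subtraction into SQL, e.g. `last_used < NOW(6) - INTERVAL 100000 MICROSECOND`) prevents this for any input, rather than only for the 99/101 ms offsets that happen to avoid the boundary.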
Related
Since MySQL 8.0.27, multithreaded replication is enabled by default on replica servers. Source
Until then, when replication failed, we could get the exact error from Last_Error in the output of show replica status\G. Now the error is replaced by a generic message referring to transaction 'ANONYMOUS':
Coordinator stopped because there were error(s) in the worker(s). The
most recent failure being: Worker 1 failed executing transaction
'ANONYMOUS' at master log mysql-bin.031116, end_log_pos 81744270. See
error log and/or
performance_schema.replication_applier_status_by_worker table for more
details about this failure or others, if any.
The table performance_schema.replication_applier_status_by_worker does not contain the exact error either:
mysql> select * from performance_schema.replication_applier_status_by_worker\G
*************************** 1. row ***************************
CHANNEL_NAME:
WORKER_ID: 1
THREAD_ID: 128
SERVICE_STATE: ON
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_APPLIED_TRANSACTION: ANONYMOUS
LAST_APPLIED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2021-11-16 11:35:04.414021
LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 2021-11-16 11:35:04.414021
LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP: 2021-11-16 11:35:04.416898
LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP: 2021-11-16 11:35:04.420018
APPLYING_TRANSACTION:
APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
APPLYING_TRANSACTION_START_APPLY_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_APPLIED_TRANSACTION_RETRIES_COUNT: 0
LAST_APPLIED_TRANSACTION_LAST_TRANSIENT_ERROR_NUMBER: 0
LAST_APPLIED_TRANSACTION_LAST_TRANSIENT_ERROR_MESSAGE:
LAST_APPLIED_TRANSACTION_LAST_TRANSIENT_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
APPLYING_TRANSACTION_RETRIES_COUNT: 0
APPLYING_TRANSACTION_LAST_TRANSIENT_ERROR_NUMBER: 0
APPLYING_TRANSACTION_LAST_TRANSIENT_ERROR_MESSAGE:
APPLYING_TRANSACTION_LAST_TRANSIENT_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
-- <I've removed 3 other similar blocks>
I can indeed find the error in the MySQL error log (e.g. "Could not execute Write_rows event on table db.table; Duplicate entry '16737' for key 'table.PRIMARY'"), but not from a query anymore.
Is there another query that would give me this last error message? Or a specific setting to log it and display it under show replica status\G?
Actually, you are correct. Based on the documentation, once you are using MTR (multi-threaded replication), the SQL thread is the coordinator for worker threads. Here’s what the documentation has to say about it:
If the replica is multithreaded, the SQL thread is the coordinator for worker threads. In this case, the Last_SQL_Error field shows exactly what the Last_Error_Message column in the Performance Schema replication_applier_status_by_coordinator table shows. The field value is modified to suggest that there may be more failures in the other worker threads which can be seen in the replication_applier_status_by_worker table that shows each worker thread's status. If that table is not available, the replica error log can be used. The log or the replication_applier_status_by_worker table should also be used to learn more about the failure shown by SHOW SLAVE STATUS or the coordinator table.
In that case, you can first check performance_schema.replication_applier_status_by_coordinator and then get the details from performance_schema.replication_applier_status_by_worker. Otherwise, the error log can be used.
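For reference, the lookup described above can be done directly in SQL; a minimal sketch using the column names documented for these two Performance Schema tables in MySQL 8.0:

```sql
-- Coordinator-level error (what SHOW REPLICA STATUS summarizes):
SELECT last_error_number, last_error_message, last_error_timestamp
FROM performance_schema.replication_applier_status_by_coordinator;

-- Per-worker detail; filter out workers that have no recorded error:
SELECT worker_id, last_error_number, last_error_message
FROM performance_schema.replication_applier_status_by_worker
WHERE last_error_number <> 0;
```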
Context: a Symfony 4.4 web app, hosted on an Ubuntu-based Docker image on Azure Web App, connected to a MySQL 5.7 Azure Database for MySQL.
We have MANY errors (>5K events in Sentry per 14 days) like:
Case 1
Doctrine\DBAL\DBALException
An exception occurred while executing 'SELECT qd.id as uuid, qd.content as content FROM queue_data qd WHERE qd.tag = ? LIMIT 1000' with params [...]:
Warning: Error while sending QUERY packet. PID=...
Coming from:
// App\Utility\Queue\Service\QueueDataService::getData
public function getData(string $tag, int $limit = self::DEFAULT_LIMIT): array
{
    return $this->getQueryBuilderForTag($tag)
        ->select('qd.id as uuid', 'qd.content as content')
        ->setMaxResults($limit)
        ->execute()
        ->fetchAll(FetchMode::ASSOCIATIVE);
}
Case 2
Doctrine\DBAL\DBALException
An exception occurred while executing 'SET NAMES utf8mb4':
Warning: Error while sending QUERY packet. PID=...
Coming from:
// custom code
private function myMethod()
{
    // ...
    $this->connection->executeQuery('SET NAMES utf8mb4');
    // ...
}
...
Sentry shows 245 "issues" with this message over 14 days, i.e. 245 distinct instances of the same problem, each having between 1 and 2K events (some instances actually come from consumers that are executed VERY frequently).
Nevertheless, it doesn't seem to have any impact on users...
Does anyone else have the same issues ?
Is it possible to fix this ?
How ?
Cheers !
Found the issue:
on my DB server (MySQL), the parameter wait_timeout was set to 120 seconds
I have several message-consumer processes managed by Supervisor
these workers consume several messages per process and had no time-limit, so they could sit waiting for a new message to consume for more than 2 minutes
in that case, the DB server had already closed the connection, but the Doctrine client was not aware of it until it tried to execute a query, which then failed
My fix was to:
increase the DB server's wait_timeout to 600 seconds (10 minutes)
add a time-limit of 300 seconds (5 minutes) to my consumer processes
The root cause should still be addressed by the Doctrine team: it seems weird that it doesn't ping the server (test whether the connection is still open) before trying to execute a query.
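Until the client library handles this itself, the usual workaround is a reconnect-and-retry wrapper: on the specific stale-connection failure, reconnect once and re-run the query. A toy Python sketch of the pattern (the Connection class below is a stand-in that simulates a server closing idle connections after wait_timeout; it is not Doctrine's or any driver's real API):

```python
import time

class StaleConnectionError(Exception):
    """Stands in for 'Error while sending QUERY packet' / 'server has gone away'."""

class Connection:
    """Toy connection: the 'server' closes it once it is older than wait_timeout."""
    def __init__(self, wait_timeout: float):
        self.wait_timeout = wait_timeout
        self.opened_at = time.monotonic()

    def query(self, sql: str) -> str:
        if time.monotonic() - self.opened_at > self.wait_timeout:
            raise StaleConnectionError("Error while sending QUERY packet")
        return "ok"

def execute_with_reconnect(connect, conn, sql):
    """Run sql; on a stale connection, reconnect exactly once and retry."""
    try:
        return conn.query(sql), conn
    except StaleConnectionError:
        conn = connect()  # fresh connection replaces the dead one
        return conn.query(sql), conn
```

Retrying only once, and only on the stale-connection error, keeps the wrapper from masking genuine query failures.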
Cheers !
I am running two MySQL databases -- one is on an Amazon AWS cloud server, and another is running on a server in my network.
These two databases are replicating normally in a multi-master arrangement seemingly without issue, but then every once in a while -- a few times a day -- I get an error in my application saying "Plugin instructed the server to rollback the current transaction."
The error persists for a while (at least 15 minutes), and then replication goes back to normal. In the MySQL error log I don't see anything, but in the general query log I do see the rollback happening:
2018-09-10T22:50:25.185065Z 4342 Query UPDATE `visit_team` SET `created` = '2018-09-10 12:34:56.306918', `last_updated` = '2018-09-10 22:50:25.183904', `last_changed` = '2018-09-10 22:50:25.183904', `visit_id` = 'J8R2QY', `station_type_id` = 'puffin', `current_state_id` = 680 WHERE `visit_team`.`uuid` = 'S80OSQ'
2018-09-10T22:50:25.185408Z 4342 Query commit
2018-09-10T22:50:25.222304Z 4340 Quit
2018-09-10T22:50:25.226917Z 4341 Query set autocommit=1
2018-09-10T22:50:25.240787Z 4341 Query SELECT `program_nodeconfig`.`id`, `program_nodeconfig`.`program_id`, `program_nodeconfig`.`node_id`, `program_nodeconfig`.`application_id`, `program_nodeconfig`.`bundle_version_id`, `program_nodeconfig`.`arguments`, `program_nodeconfig`.`station_type_id` FROM `program_nodeconfig` INNER JOIN `supervisor_node` ON (`program_nodeconfig`.`node_id` = `supervisor_node`.`id`) WHERE (`program_nodeconfig`.`program_id` = 'rwrs' AND `supervisor_node`.`cluster_id` = 2 AND `program_nodeconfig`.`station_type_id` = 'osprey')
... Six more select statements happen here, but removed for brevity...
2018-09-10T22:50:25.253520Z 4342 Query rollback
2018-09-10T22:50:25.253624Z 4342 Query set autocommit=1
In the log file above, the UPDATE attempted in the first line gets rolled back even after the commit statement, and at 2018-09-10T22:50:25.254394 I received an application error saying that the query was rolled back.
I've seen the error when connecting to both databases -- both the cloud and internal.
Does anyone know what would cause the replication to fail randomly, but on a regular basis, and then go back to working again?
I have an SSIS package which uses a SQL command to get data from a Progress database. Every time I execute the query, it throws this specific error:
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function recPutLONG called from sts_srtt_t:::add_row on (ttbl# 4, len/maxlen/reqlen = 33/32/33) for . Save log for Progress technical support.
I am running the following query:
Select max(ROWID) as maxRowID from TableA
GROUP BY ColumnA,ColumnB,ColumnC,ColumnD
I've had the same error.
After changing the startup parameters -SQLTempStorePageSize and -SQLTempStoreBuff to 24 and 3000 respectively, the problem was solved.
I think for you the values must be changed to 40 and 20000.
You can find more information here. The name of the parameter in that article was a bit different than in my database; it depends on the Progress version which is used.
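A hedged sketch of where these parameters might go, assuming the database is started with a parameter (.pf) file; the parameter names are taken from the answer above, and the exact spelling should be verified against your Progress/OpenEdge version:

```
# db.pf -- startup parameters for SQL temp-table storage
-SQLTempStorePageSize 40
-SQLTempStoreBuff 20000
```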
I got this error when trying to insert the value 2016-03-27T03:15:51.213 into a column with the data type 'timestamp' in my Yii1 app:
exception 'CDbException' with message 'CDbCommand failed to execute the SQL statement: SQLSTATE[22007]: Invalid datetime format: 1292 Incorrect datetime value: '2016-03-27T03:15:51.213' for column 'created' at row 1.
The strangest thing: when I try to insert the value 2016-03-27T13:15:51.213, everything's OK. What's wrong?
I use OpenServer on my Windows machine with PHP 5.6 and MySQL 5.7.
Finally I've found the solution. The cause of the problem was my Windows machine: the option to automatically adjust the clock for daylight saving time was enabled.
Because of this option, my computer didn't know about the time between 3AM and 4AM when the timezone changes: while the option is on, that hour does not physically exist. So simple!
When I turned off this option on my PC and rebooted, the message disappeared from the logs.
BTW: this problem won't appear on PCs with Linux and UTC settings, which have no automatic transitions to winter and summer time.
Hope my answer will be helpful for somebody.
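The same "nonexistent hour" effect can be demonstrated without Windows. A hedged Python sketch using zoneinfo; Europe/Paris is used purely as an example zone whose 2016-03-27 spring-forward jump (02:00 to 03:00) makes 02:30 local time nonexistent, while the OP's machine was skipping the 3AM-4AM hour instead:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Paris")  # clocks jumped 02:00 -> 03:00 on 2016-03-27

def local_time_exists(naive: datetime) -> bool:
    """A naive local time exists iff it survives a round trip through UTC."""
    utc = naive.replace(tzinfo=tz).astimezone(timezone.utc)
    return utc.astimezone(tz).replace(tzinfo=None) == naive

skipped = datetime(2016, 3, 27, 2, 30)  # falls inside the spring-forward gap
normal = datetime(2016, 3, 27, 4, 30)   # an ordinary local time
```

A check like this in the application (or simply working in UTC end to end) avoids ever handing the database a local time that doesn't exist.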
Do something like this. First build a datetime string from the ISO value (note the uppercase H for 24-hour format; lowercase h would turn 13:15 into 01:15):
$datetime = date("Y-m-d H:i:s", strtotime("2016-03-27T13:15:51.213")); // Output = 2016-03-27 13:15:51
And then use this $datetime in the SQL query.
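For comparison, the same conversion in Python: parse the ISO 8601 string, then format it the way a MySQL DATETIME/TIMESTAMP column expects (the fractional part is dropped here, matching the PHP snippet above):

```python
from datetime import datetime

iso = "2016-03-27T13:15:51.213"
dt = datetime.strptime(iso, "%Y-%m-%dT%H:%M:%S.%f")
mysql_value = dt.strftime("%Y-%m-%d %H:%M:%S")
```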