Our AWS MySQL RDS replica database stopped replicating due to this error. What is the best way to approach it? If it keeps happening, I doubt we can keep a replica. We also can't just allow any date format to come in. Should we fix the problem or ignore the error?
The data type of last_used is datetime(6).
2022-09-15T17:11:10.044407Z 15395 [ERROR] [MY-010584] [Repl] Slave SQL for channel '': Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin-changelog.041268, end_log_pos 17028740; Error 'Incorrect DATETIME value: '2022-09-15 13:11:10.-99999'' on query. Default database: 'company_name'. Query: 'UPDATE numbers SET current_url = www',last_used = '2022-09-15 13:11:10.000001' WHERE tracking IN (8886424548) AND profile = 111111 AND (last_used < '2022-09-15 13:11:10.-99999' OR last_used IS NULL)', Error_code: MY-001525
We have now tried using 101 ms and 99 ms, and both return the correct value, so we have changed it to use 101 ms. It would be nice to know how we can keep this from happening in all cases. Thanks!
AWS' team came back with an answer on this error:
As per the error, you are using the wrong DATETIME format '2022-09-15 13:11:10.-99999' for the UPDATE query on the table 'company'. The DATETIME format you are using has a negative precision for the fractional seconds. To fix this error, kindly change the fractional value to a positive one; fractional seconds can never be negative.
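For what it's worth, the exact value in the error (.-99999) is what you would get if the cutoff timestamp were built by formatting the microsecond field by hand instead of doing the arithmetic on the datetime value itself: 1 µs minus 100 ms is -99999 µs. This is only an assumption about how the application builds the literal, but a minimal Python sketch shows both the failure mode and the fix:

```python
from datetime import datetime, timedelta

last_used = datetime(2022, 9, 15, 13, 11, 10, 1)  # 13:11:10.000001

# Broken (assumed failure mode): subtracting from the microsecond field
# alone goes negative, producing the invalid literal '...10.-99999'
# seen in the replication error.
broken = f"{last_used:%Y-%m-%d %H:%M:%S}.{last_used.microsecond - 100000}"

# Correct: subtract a timedelta so seconds and minutes borrow properly
# and the fractional part stays non-negative.
cutoff = last_used - timedelta(milliseconds=100)  # 13:11:09.900001
```

Any timestamp built this way (or by the driver's own parameter binding, rather than string formatting) can never produce a negative fraction, which would keep the bad literal out of the binlog in all cases, not just for the 101 ms value.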
Environment
Spring-boot 2.6.6
Spring-data-jpa 2.6.6
Kotlin 1.6.10
mysql:mysql-connector-java 8.0.23
mysql 5.7 (innoDB)
Problem
A transaction commit happens during execution of my transactional method and I don't know why.
Code
@Transactional(isolation = Isolation.SERIALIZABLE)
fun login(username: String) {
    userRepository.findByUsername(username)
        ?.let {
            it.lastLoggedIn = LocalDateTime.now()
            userRepository.save(it)
        }
        ?: run {
            throw Exception("User not found")
        }
}
Result (from APM monitoring tool)
-------------------------------------------------------------------------------------
p# # TIME T-GAP CONTENTS
-------------------------------------------------------------------------------------
[******] 18:50:13.493 0 Start transaction
- [000003] 18:50:13.639 40 spring-tx-doBegin(PROPAGATION_REQUIRED,ISOLATION_SERIALIZABLE)
- [000004] 18:50:13.639 0 getConnection jdbc:mysql://....(com.zaxxer.hikari.HikariDataSource#getConnection) 0 ms
- [000005] 18:50:13.641 2 [PREPARED] select ...{ellipsis} from user user0_ where user0_.username=? 0 ms
- [000006] 18:50:13.641 0 Fetch Count 1 1 ms
- [000007] 18:50:13.642 1 spring-tx-doCommit
- [000008] 18:50:13.643 1 [PREPARED] update user set last_logged_in=? where id=? 0 ms
[******] 18:50:13.646 3 End transaction
My thoughts
I set the transaction isolation level to SERIALIZABLE for a reason. As I understand it, SERIALIZABLE makes select queries run with LOCK IN SHARE MODE to take a shared lock, while an exclusive lock is needed to execute the update query. An exclusive lock cannot be acquired on a row locked in share mode, so I thought perhaps the transaction commits first to release the shared lock before acquiring the exclusive lock... but these query executions are in the same transaction, on the same DB connection, and therefore in the same DB session. It doesn't make sense to me that the shared lock would have to be released to get an exclusive lock within the same session.
What am I missing?
After upgrading Drill on our cluster to drill-1.12.0-mapr, our daily ETL scripts (which all use Drill to convert Parquet files to TSV) consistently fail with a validation error ("table or view with given name already exists") when running a CREATE TABLE statement against certain empty directories in a writable workspace.
[Error Id: 6ea46737-8b6a-4887-a671-4bddbea02476 on mapr002.ucera.local:31010]
at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:489)
at org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:561)
:
:
:
Caused by: org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: A table or view with given name [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] already exists in schema [dfs.etl_internal]
After some brief debugging, I see that the FS directory in question under the specified dfs.etl_internal workspace (i.e. /internal_etl/project/version-2/stages/storage/ACCOUNT/tsv) is in fact empty, yet Drill still throws these errors.
Looking for the error ID in the drillbit.log file on the node named in the error message above, we see:
2018-12-04 10:13:25,285 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id 23f92019-db56-862f-e7b9-cd51b3e174ae: create table dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv` as
select <a bunch of fields>
from dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/parquet`
2018-12-04 10:13:25,406 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,408 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,893 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,894 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,898 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,898 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2018-12-04 10:13:25,905 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.a.d.e.p.s.h.CreateTableHandler - User Error Occurred: A table or view with given name [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] already exists in schema [dfs.etl_internal]
org.apache.drill.common.exceptions.UserException: VALIDATION ERROR: A table or view with given name [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] already exists in schema [dfs.etl_internal]
[Error Id: 45177abc-7e9f-4678-959f-f9e0e38bc564 ]
at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586) ~[drill-common-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.handlers.CreateTableHandler.checkTableCreationPossibility(CreateTableHandler.java:326) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.handlers.CreateTableHandler.getPlan(CreateTableHandler.java:90) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:131) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:79) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:567) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) [drill-java-exec-1.12.0-mapr.jar:1.12.0-mapr]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
2018-12-04 10:13:25,924 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.apache.drill.exec.work.WorkManager - Waiting for 0 queries to complete before shutting down
2018-12-04 10:13:25,924 [23f92019-db56-862f-e7b9-cd51b3e174ae:foreman] INFO o.apache.drill.exec.work.WorkManager - Waiting for 0 running fragments to complete before shutting down
This error occurs even when using DROP TABLE [IF EXISTS] <workspace>.<table path name> before the CREATE TABLE statement. Furthermore, the configuration of the dfs workspace itself does not appear to have changed from before the upgrade to drill-1.12; see below:
:
:
"workspaces": {
"root": {
"location": "/",
"writable": false,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
},
"tmp": {
"location": "/tmp",
"writable": true,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
},
"etl_internal": {
"location": "/etl/internal",
"writable": true,
"defaultInputFormat": null,
"allowAccessOutsideWorkspace": false
}
},
:
:
Note that the full process in question is intended to mv the directory contents every day and CREATE TABLE with new data from the current day (in case that makes a difference), and this process had been working fine when we were using drill-1.11.
More debugging information:
Simply deleting the .../tsv folder and relying on Drill to create the directory during the CREATE TABLE statement does not work; it throws the unsurprising error
Error: VALIDATION ERROR: Table [/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv] not found
[Error Id: 02e7c088-9162-4731-9fa8-85dfd39e1dec on mapr001.ucera.local:31010] (state=,code=0)
I.e. Drill does not appear to be creating the table automatically.
Undoing these changes and rerunning to get the original error, we can examine the location via the sqlline interpreter interface. Doing so, we see
0: jdbc:drill:zk=mapr001:5181,mapr002:5181,ma> describe dfs.etl_internal.`/internal_etl/project/version-2/stages/storage/ACCOUNT/tsv`;
+--------------+------------+--------------+
| COLUMN_NAME | DATA_TYPE | IS_NULLABLE |
+--------------+------------+--------------+
+--------------+------------+--------------+
No rows selected (1.791 seconds)
So it sees something there, but only when I create the directory myself, which is like a catch-22 given that the original error complains that something is already there.
If anyone with more experience using drill knows what could be happening here, any opinions or advice would be appreciated.
It looks like you made a mistake in the process of upgrading the Drill version on your MapR cluster. Please see this doc for more info: http://doc.mapr.com/display/MapR/Upgrading+to+the+Latest+Version+of+Drill
or these docs if you are using the latest MapR Core version:
https://mapr.com/docs/home/UpgradeGuide/PreupgradeStepsDrill.html?hl=drill%2Cupgrade
https://mapr.com/docs/home/UpgradeGuide/PostUpgradeStepsDrill.html?hl=drill%2Cupgrade
DROP TABLE for Drill schemaless tables works fine. See more info about Drill schemaless tables (empty directories):
https://drill.apache.org/docs/data-sources-and-file-formats-introduction/#schemaless-tables
TL;DR: restarted the drillbits on the nodes and everything appears to be working now.
What was done to get Drill to run the CTAS statement without error:
1. Restarted the Drill services from the MapR MCS. This was done purely on a hunch, due to the hanging-drill-1.11-processes issue encountered earlier: after upgrading from drill-1.11 to drill-1.12, we had to manually go to each node, run jps to see that the 1.11 drillbit was still running, kill -9 <pid of 1.11 drillbit>, and restart the drillbits to get 1.12 working. Not sure how much this helped, but documenting it, since it was the only change made during debugging that was not undone before the changes that ultimately appear to have resolved the error.
2. Changed the drill-using scripts to delete the target folder of the CTAS statement (hadoop fs -rm -r /hdfs/path/to/folder) after running some necessary processes on it, then letting the CTAS statement re-create it itself (even though, as mentioned in the original post, trying this earlier produced "Table not found" errors in a weird catch-22 situation; hence my thinking that restarting the Drill services may have contributed).
I know that just restarting the services may not be the most informative answer, but that's what appeared to work here. If anyone has any more information or thoughts to add based on the solution description above, please do leave a comment.
I am running two MySQL databases -- one is on an Amazon AWS cloud server, and another is running on a server in my network.
These two databases replicate in a multi-master arrangement, seemingly without issue, but every once in a while -- a few times a day -- I get an error in my application saying "Plugin instructed the server to rollback the current transaction."
The error persists for a few minutes (around 15 minutes), and then replication goes back to normal. In the MySQL error log I don't see any error, but in the general query log I do see the rollback happening:
2018-09-10T22:50:25.185065Z 4342 Query UPDATE `visit_team` SET `created` = '2018-09-10 12:34:56.306918', `last_updated` = '2018-09-10 22:50:25.183904', `last_changed` = '2018-09-10 22:50:25.183904', `visit_id` = 'J8R2QY', `station_type_id` = 'puffin', `current_state_id` = 680 WHERE `visit_team`.`uuid` = 'S80OSQ'
2018-09-10T22:50:25.185408Z 4342 Query commit
2018-09-10T22:50:25.222304Z 4340 Quit
2018-09-10T22:50:25.226917Z 4341 Query set autocommit=1
2018-09-10T22:50:25.240787Z 4341 Query SELECT `program_nodeconfig`.`id`, `program_nodeconfig`.`program_id`, `program_nodeconfig`.`node_id`, `program_nodeconfig`.`application_id`, `program_nodeconfig`.`bundle_version_id`, `program_nodeconfig`.`arguments`, `program_nodeconfig`.`station_type_id` FROM `program_nodeconfig` INNER JOIN `supervisor_node` ON (`program_nodeconfig`.`node_id` = `supervisor_node`.`id`) WHERE (`program_nodeconfig`.`program_id` = 'rwrs' AND `supervisor_node`.`cluster_id` = 2 AND `program_nodeconfig`.`station_type_id` = 'osprey')
... Six more select statements happen here, but removed for brevity...
2018-09-10T22:50:25.253520Z 4342 Query rollback
2018-09-10T22:50:25.253624Z 4342 Query set autocommit=1
In the log file above, the UPDATE query attempted in the first line gets rolled back even after the commit statement, and at 2018-09-10T22:50:25.254394 I received an application error saying that the query was rolled back.
I've seen the error when connecting to both databases -- both the cloud and internal.
Does anyone know what would cause the replication to fail randomly, but on a regular basis, and then go back to working again?
How do I disable the "unsafe statement for binary logging" warning message in the error log in MySQL 5.5?
I don't want to change my binlog format to ROW or MIXED.
In Percona Server there is a variable for this: log_warnings_suppress = 1592.
Is there anything like this in MySQL?
Thanks,
Ash
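For reference, the Percona Server setting mentioned in the question would go in my.cnf. This is Percona-specific (assuming the variable behaves as described above); stock MySQL 5.5 has no equivalent server option, which is why the question arises:

```ini
[mysqld]
# Percona Server only: suppress warning 1592
# ("Unsafe statement written to the binary log...")
log_warnings_suppress = 1592
```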
If you are getting that from a DELETE with a LIMIT, there is a workaround.
Do a SELECT with the same ORDER BY and LIMIT to get the id (or range of ids) that needs to be deleted.
Then perform the DELETE with that id, an IN ( ... ) list, or id BETWEEN ... AND ....
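A minimal sketch of that workaround in Python, using SQLite in place of MySQL (the table and column names are made up for illustration): SELECT the ids with the ORDER BY and LIMIT first, then DELETE by primary key. The second statement is deterministic, which is what makes it safe for statement-based binary logging:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created INTEGER)")
conn.executemany("INSERT INTO events (id, created) VALUES (?, ?)",
                 [(i, 100 - i) for i in range(1, 11)])

# Step 1: pick the rows to delete with an explicit ORDER BY and LIMIT.
ids = [row[0] for row in conn.execute(
    "SELECT id FROM events ORDER BY created ASC LIMIT 3")]

# Step 2: delete by primary key -- deterministic, so replication-safe.
placeholders = ",".join("?" * len(ids))
conn.execute(f"DELETE FROM events WHERE id IN ({placeholders})", ids)
conn.commit()
```

The trade-off is an extra round trip for the SELECT, but the DELETE no longer depends on row ordering, so the warning goes away without changing the binlog format.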
@ircmaxwell He isn't actually hiding the warning (in this case); he is suppressing a warning that is not relevant to his setup. This is a warning about an unsafe statement for binary logging, which could be just an UPDATE with a LIMIT clause, for example.
It's normally 'fixed' by setting replication to ROW or MIXED. If that is not wanted, then Percona's variable is the way to 'hide' it.