Error while trying to run connector class 'io.debezium.connector.mysql.MySqlConnector' - mysql

We have a Java 11 Spring Boot app integrated with Kafka/Debezium.
When a change is made on a MySQL (5.7) database (a user is dropped / tables are recreated with data), Debezium's BinlogReader starts reading the binlogs, but then logs the following exception:
2021-07-22 10:05:08.584 INFO 24570 --- [debyte.com:3306] i.d.r.history.DatabaseHistoryMetrics : Already applied 10969 database changes
2021-07-22 10:05:10.134 ERROR 24570 --- [debyte.com:3306] i.debezium.connector.mysql.BinlogReader : Error while deserializing binlog event at offset {ts_sec=1626941231, file=mysql-bin.000532, pos=75496603, server_id=2001, event=876}.
Use the mysqlbinlog tool to view the problematic event: mysqlbinlog --start-position=82610388 --stop-position=82618497 --verbose mysql-bin.000532
2021-07-22 10:05:10.135 ERROR 24570 --- [debyte.com:3306] i.debezium.connector.mysql.BinlogReader : Error during binlog processing. Last offset stored = {ts_sec=1626941227, file=mysql-bin.000532, pos=71066122, row=272, server_id=2001, event=518}, binlog reader near position = mysql-bin.000532/82610388
2021-07-22 10:05:10.150 ERROR 24570 --- [debyte.com:3306] i.debezium.connector.mysql.BinlogReader : Failed due to error: Error processing binlog event
org.apache.kafka.connect.errors.ConnectException: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1626941231000, eventType=EXT_WRITE_ROWS, serverId=2001, headerLength=19, dataLength=8090, nextPosition=82618497, flags=0}
at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230) ~[debezium-connector-mysql-1.0.0.Final.jar:1.0.0.Final]
at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:207) ~[debezium-connector-mysql-1.0.0.Final.jar:1.0.0.Final]
at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:536) ~[debezium-connector-mysql-1.0.0.Final.jar:1.0.0.Final]
at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1158) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:1005) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at com.github.shyiko.mysql.binlog.BinaryLogClient.connectWithTimeout(BinaryLogClient.java:517) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at com.github.shyiko.mysql.binlog.BinaryLogClient.access$1100(BinaryLogClient.java:90) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:881) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
Caused by: java.lang.RuntimeException: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1626941231000, eventType=EXT_WRITE_ROWS, serverId=2001, headerLength=19, dataLength=8090, nextPosition=82618497, flags=0}
at io.debezium.connector.mysql.BinlogReader.handleServerIncident(BinlogReader.java:604) ~[debezium-connector-mysql-1.0.0.Final.jar:1.0.0.Final]
at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:519) ~[debezium-connector-mysql-1.0.0.Final.jar:1.0.0.Final]
... 6 common frames omitted
Caused by: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1626941231000, eventType=EXT_WRITE_ROWS, serverId=2001, headerLength=19, dataLength=8090, nextPosition=82618497, flags=0}
at com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:300) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.nextEvent(EventDeserializer.java:223) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at io.debezium.connector.mysql.BinlogReader$1.nextEvent(BinlogReader.java:239) ~[debezium-connector-mysql-1.0.0.Final.jar:1.0.0.Final]
at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:984) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
... 4 common frames omitted
Caused by: java.io.EOFException: null
at com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.read(ByteArrayInputStream.java:190) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at java.base/java.io.InputStream.read(InputStream.java:271) ~[na:na]
at java.base/java.io.InputStream.skip(InputStream.java:531) ~[na:na]
at com.github.shyiko.mysql.binlog.io.ByteArrayInputStream.skipToTheEndOfTheBlock(ByteArrayInputStream.java:216) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
at com.github.shyiko.mysql.binlog.event.deserialization.EventDeserializer.deserializeEventData(EventDeserializer.java:296) ~[mysql-binlog-connector-java-0.21.0.jar:0.21.0]
... 7 common frames omitted
2021-07-22 10:05:10.150 INFO 24570 --- [debyte.com:3306] i.debezium.connector.mysql.BinlogReader : Error processing binlog event, and propagating to Kafka Connect so it stops this connector. Future binlog events read before connector is shutdown will be ignored.
2021-07-22 10:05:10.152 INFO 24570 --- [debyte.com:3306] i.debezium.connector.mysql.BinlogReader : Stopped reading binlog after 167936 events, last recorded offset: {ts_sec=1626941227, file=mysql-bin.000532, pos=71066122, row=272, server_id=2001, event=518}
2021-07-22 10:05:37.521 INFO 24570 --- [pool-1-thread-1] i.d.connector.mysql.MySqlConnectorTask : Stopping MySQL connector task
This is also logged on application startup.
Running the suggested mysqlbinlog command:
mysqlbinlog --start-position=82610388 --stop-position=82618497 --verbose mysql-bin.000532
returns:
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#210722 8:06:26 server id 2001 end_log_pos 123 CRC32 0xc7bff6dc Start: binlog v 4, server v 5.7.19-log created 210722 8:06:26
BINLOG '
Aif5YA/RBwAAdwAAAHsAAAAAAAQANS43LjE5LWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXwAEGggAAAAICAgCAAAACgoKKioAEjQA
Adz2v8c=
'/*!*/;
# at 82610388
#210722 8:07:11 server id 2001 end_log_pos 82618497 CRC32 0xd2561ccc Write_rows: table id 43469
WARNING: The range of printed events ends with a row event or a table map event that does not have the STMT_END_F flag set. This might be because the last statement was not fully written to the log, or because you are using a --stop-position or --stop-datetime that refers to an event in the middle of a statement. The event(s) from the partial statement have not been written to output.
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
I have no idea why this is happening or how to code around it. The error is not thrown up to the app.
Has anyone else had a similar issue?

I found a solution in Debezium's docs:
What is causing intermittent EventDataDeserializationExceptions with the MySQL connector?
When you run into intermittent deserialization exceptions around 1 minute after starting connector, with a root cause of type EOFException or java.net.SocketException: Connection reset:
Caused by: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1542193955000, eventType=GTID, serverId=91111, headerLength=19, dataLength=46, nextPosition=1058898202, flags=0}
Caused by: java.lang.RuntimeException: com.github.shyiko.mysql.binlog.event.deserialization.EventDataDeserializationException: Failed to deserialize data of EventHeaderV4{timestamp=1542193955000, eventType=GTID, serverId=91111, headerLength=19, dataLength=46, nextPosition=1058898202, flags=0}
Caused by: java.io.EOFException
or
Caused by: java.net.SocketException: Connection reset
Then updating these MySQL server global properties like this will fix it:
set global slave_net_timeout = 120; (default was 30sec)
set global thread_pool_idle_timeout = 120;
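For anyone applying this fix, here is a minimal way to check and apply the settings from a MySQL client, assuming a user with the SUPER privilege (note that thread_pool_idle_timeout only exists on builds with the thread pool plugin, e.g. Percona Server or MariaDB):

-- check the current value first
SHOW GLOBAL VARIABLES LIKE 'slave_net_timeout';
-- raise the timeouts; SET GLOBAL does not survive a server restart,
-- so mirror the values in my.cnf to make them permanent
SET GLOBAL slave_net_timeout = 120;
SET GLOBAL thread_pool_idle_timeout = 120;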

Related

Atomikos transaction timeout when insert into huge table

I'm using Atomikos to handle a global transaction between 2 different databases. This ran fine before with more than 1 billion records, and the problem only appeared a couple of days ago. I guess the reason is that my table has grown, so the job now hits the configured Atomikos timeout.
Can anyone help me, please?
For more detail, this is the log that I received:
{"timestamp":"2022-11-29 18:16:09.691","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"insert.job.BagInsertQuartzJob","msg":">>>>> START JOB: bagInsertingJob - CMJobId -28979"}
{"timestamp":"2022-11-29 18:16:09.726","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"o.s.b.c.l.support.SimpleJobLauncher","msg":"Job: [SimpleJob: [name=bagInsertingJob]] launched with the following parameters: [{Time=1669720569691, JobCMId=-28979}]"}
{"timestamp":"2022-11-29 18:16:09.728","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"i.j.l.BagInsertPartitionerAndJobListener","msg":">>> Just Picked a Job: -28979"}
{"timestamp":"2022-11-29 18:16:09.732","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"o.s.batch.core.job.SimpleStepHandler","msg":"Executing step: [bagInsertMasterStep]"}
{"timestamp":"2022-11-29 18:16:09.787","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"common.util.PartitionUtil","msg":">>>> Partition: 0- Start: 0 - End: 10"}
{"timestamp":"2022-11-29 18:16:09.787","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"common.util.PartitionUtil","msg":">>>> Partition: 1- Start: 11 - End: 21"}
{"timestamp":"2022-11-29 18:16:09.787","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"common.util.PartitionUtil","msg":">>>> Partition: 2- Start: 22 - End: 32"}
{"timestamp":"2022-11-29 18:16:09.788","level":"INFO","thread":"[BagInsertionScheduler_Worker-1] ","logger":"common.util.PartitionUtil","msg":">>>> Partition: 3- Start: 33 - End: 39"}
{"timestamp":"2022-11-29 18:16:19.866","level":"WARN","thread":"[Atomikos:6] ","logger":"c.a.icatch.imp.ActiveStateHandler","msg":"Transaction 10.234.73.199.tm166972056980900063 has timed out and will rollback."}
{"timestamp":"2022-11-29 18:16:19.866","level":"WARN","thread":"[Atomikos:7] ","logger":"c.a.icatch.imp.ActiveStateHandler","msg":"Transaction 10.234.73.199.tm166972056981000065 has timed out and will rollback."}
{"timestamp":"2022-11-29 18:16:19.866","level":"WARN","thread":"[Atomikos:5] ","logger":"c.a.icatch.imp.ActiveStateHandler","msg":"Transaction 10.234.73.199.tm166972056980900064 has timed out and will rollback."}
{"timestamp":"2022-11-29 18:16:19.866","level":"WARN","thread":"[Atomikos:4] ","logger":"c.a.icatch.imp.ActiveStateHandler","msg":"Transaction 10.234.73.199.tm166972056980900062 has timed out and will rollback."}
{"timestamp":"2022-11-29 18:17:11.209","level":"ERROR","thread":"[SimpleAsyncTaskExecutor-43] ","logger":"o.s.batch.core.step.AbstractStep","msg":"Encountered an error executing step bagInsertingSlaveStep in job bagInsertingJob"}
org.springframework.transaction.UnexpectedRollbackException: JTA transaction unexpectedly rolled back (maybe due to a timeout); nested exception is javax.transaction.RollbackException: One or more resources refused to commit (possibly because of a timeout in the resource - see the log for details). This transaction has been rolled back instead.
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1038)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:654)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:407)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:708)
at insert.job.step.BagInsertUpdater$$EnhancerBySpringCGLIB$$d6e05d87.write(<generated>)
at insert.job.step.BagInsertUpdater$$FastClassBySpringCGLIB$$e835cd44.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:793)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:137)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:124)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:708)
at insert.job.step.BagInsertUpdater$$EnhancerBySpringCGLIB$$efdca957.write(<generated>)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.writeItems(SimpleChunkProcessor.java:193)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.doWrite(SimpleChunkProcessor.java:159)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.write(SimpleChunkProcessor.java:294)
at org.springframework.batch.core.step.item.SimpleChunkProcessor.process(SimpleChunkProcessor.java:217)
at org.springframework.batch.core.step.item.ChunkOrientedTasklet.execute(ChunkOrientedTasklet.java:77)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:407)
at org.springframework.batch.core.step.tasklet.TaskletStep$ChunkTransactionCallback.doInTransaction(TaskletStep.java:331)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
at org.springframework.batch.core.step.tasklet.TaskletStep$2.doInChunkContext(TaskletStep.java:273)
at org.springframework.batch.core.scope.context.StepContextRepeatCallback.doInIteration(StepContextRepeatCallback.java:82)
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:375)
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215)
at org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:145)
at org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:258)
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:208)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:138)
at org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler$1.call(TaskExecutorPartitionHandler.java:135)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:831)
Caused by: javax.transaction.RollbackException: One or more resources refused to commit (possibly because of a timeout in the resource - see the log for details). This transaction has been rolled back instead.
at com.atomikos.icatch.jta.TransactionImp.rethrowAsJtaRollbackException(TransactionImp.java:48)
at com.atomikos.icatch.jta.TransactionImp.commit(TransactionImp.java:188)
at com.atomikos.icatch.jta.TransactionManagerImp.commit(TransactionManagerImp.java:414)
at com.atomikos.icatch.jta.UserTransactionImp.commit(UserTransactionImp.java:86)
at org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1035)
... 39 common frames omitted
Caused by: com.atomikos.icatch.RollbackException: One or more resources refused to commit (possibly because of a timeout in the resource - see the log for details). This transaction has been rolled back instead.
at com.atomikos.icatch.imp.ActiveStateHandler.prepare(ActiveStateHandler.java:202)
at com.atomikos.icatch.imp.CoordinatorImp.prepare(CoordinatorImp.java:523)
at com.atomikos.icatch.imp.CoordinatorImp.terminate(CoordinatorImp.java:687)
at com.atomikos.icatch.imp.CompositeTransactionImp.commit(CompositeTransactionImp.java:282)
at com.atomikos.icatch.jta.TransactionImp.commit(TransactionImp.java:172)
... 42 common frames omitted
{"timestamp":"2022-11-29 18:17:11.212","level":"INFO","thread":"[SimpleAsyncTaskExecutor-43] ","logger":"o.s.batch.core.step.AbstractStep","msg":"Step: [bagInsertingSlaveStep:partition3] executed in 1m1s410ms"}
I tried to expand the configured timeout from 1000 to 10000, but it does not work.
I hope someone can explain what happened and how I can fix it.
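In case it helps: Atomikos timeout properties are expressed in milliseconds, and any requested timeout is capped at com.atomikos.icatch.max_timeout (300000 ms by default), which may be why raising a single value appears to have no effect. A sketch of the relevant jta.properties entries (the values here are illustrative; property names per the Atomikos documentation):

# jta.properties -- all values are in milliseconds
# default timeout for transactions that do not set one explicitly (default: 10000)
com.atomikos.icatch.default_jta_timeout=600000
# hard upper bound; requested timeouts above this are silently capped (default: 300000)
com.atomikos.icatch.max_timeout=600000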

Hibernate auto table generation from domain class is not working in Grails with MySQL

All the domain classes are being mapped as tables in MySQL except one domain class named USER (the table name is changed to db_user in the mapping to avoid a reserved-keyword violation), as shown below.
And the application keeps showing this error on startup:
Caused by: java.sql.SQLSyntaxErrorException: Table 'umm.db_user' doesn't exist
Versions of stack:
| Grails Version: 3.2.3
| Groovy Version: 2.4.7
| JVM Version: 1.8.0_242
| Gradle: 3.0
| MySQL: 8.0.26-cluster
| Hibernate: 5.1.1.Final
application.groovy
application.yml
Logs:
2021-09-02 02:43:09.722 ERROR 26370 --- [ main] o.h.engine.jdbc.spi.SqlExceptionHelper : Table 'umm.db_user' doesn't exist
2021-09-02 02:43:09.783 ERROR 26370 --- [ main] grails.app.init.umm1.BootStrap : Exception occured in bootstrap:
org.hibernate.exception.SQLGrammarException: could not extract ResultSet
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:63)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:111)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:97)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:79)
at org.hibernate.loader.Loader.getResultSet(Loader.java:2122)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1905)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1881)
at org.hibernate.loader.Loader.doQuery(Loader.java:925)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:342)
at org.hibernate.loader.Loader.doList(Loader.java:2622)
at org.hibernate.loader.Loader.doList(Loader.java:2605)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2434)
at org.hibernate.loader.Loader.list(Loader.java:2429)
at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:109)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1787)
at org.hibernate.internal.CriteriaImpl.list(CriteriaImpl.java:363)
at org.grails.orm.hibernate.query.AbstractHibernateQuery.singleResultViaListCall(AbstractHibernateQuery.java:785)
at org.grails.orm.hibernate.query.AbstractHibernateQuery.singleResult(AbstractHibernateQuery.java:775)
Caused by: java.sql.SQLSyntaxErrorException: Table 'umm.db_user' doesn't exist
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:970)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:1020)
at sun.reflect.GeneratedMethodAccessor187.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1426)
at org.apache.tomcat.jdbc.pool.interceptor.StatementDecoratorInterceptor$StatementProxy.invoke(StatementDecoratorInterceptor.java:261)
at com.sun.proxy.$Proxy102.executeQuery(Unknown Source)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:70)
... 134 common frames omitted
Grails application running at http://localhost:9091/umm in environment: development
Do three things:
1. Change the dialect to org.hibernate.dialect.MySQL8Dialect
2. Update the MySQL driver dependency to mysql:mysql-connector-java:8.0.17
3. Make sure you are using the correct type in the configs (Holders.config.ProfilePictureType) (longtext, maybe)
A config sketch for the first two changes follows.
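As a sketch, the first two changes would go into the Grails configuration and build files roughly as follows (standard Grails dataSource property names; the URL, credentials, and dbCreate mode are placeholders for your setup):

// grails-app/conf/application.groovy (sketch)
dataSource {
    driverClassName = 'com.mysql.cj.jdbc.Driver'
    dialect = 'org.hibernate.dialect.MySQL8Dialect'
    dbCreate = 'update'   // lets Hibernate create/alter missing tables such as db_user
    url = 'jdbc:mysql://localhost:3306/umm'
    username = 'user'
    password = 'password'
}

// build.gradle
dependencies {
    runtime 'mysql:mysql-connector-java:8.0.17'
}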

MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: null

I have created a MySQL sink connector and it runs successfully; I see it in the logs and got a 200 response. But the sink connector is not able to push the data into the MySQL DB (the data is available in the topic).
{
  "name": "mysql-sink-connector",
  "config": {
    "tasks.max": "2",
    "batch.size": "1000",
    "batch.max.rows": "1000",
    "poll.interval.ms": "500",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:mysql://mysql.azure.com:3306/db_test_dev",
    "table.name.format": "tbl_clients_merchants",
    "topics": "createorder",
    "connection.user": "user",
    "connection.password": "password",
    "auto.create": "true",
    "auto.evolve": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true"
  }
}
I am getting this error:
[2021-07-29 09:41:03,157] ERROR WorkerSinkTask{id=jdbc-mysql-sink-connector-1} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177) org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:344)
at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema type: null
at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162) ... 13 more
[2021-07-29 13:26:11,347] ERROR WorkerSinkTask{id=mysql-sink-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
As the error suggests, you have null tombstone records in the topic, which cannot be mapped to any MySQL columns since there are no field names.
Option 1) Fix your producer to not send these messages
Option 2) Filter them in Connect
For example,
"transforms": "Filter",
"transforms.Filter.type": "org.apache.kafka.connect.transforms.Filter",
"transforms.Filter.predicate" : "DropNull",
"predicates" : "DropNull",
"predicates.DropNull.type": "org.apache.kafka.connect.transforms.predicates.RecordIsTombstone"

HikariCP trying to obtain connection to pool with different user id than provided

I have a Java app (running Java 11) using HikariCP v3.3.1, with MySQL Connector/J version 8.0.17. I have specified username = 'foo' and password = 'password' and Hikari opens up a connection to the DB with these credentials. This all works.
There is a problem that I see in my logs, however. I have debugging enabled for Hikari and I see stack traces like so:
2019-09-10 01:35:13.452 DEBUG 4941 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Cannot acquire connection from data source
java.sql.SQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 10 times. Giving up.
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:73) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.ConnectionImpl.connectWithRetries(ConnectionImpl.java:897) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:822) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:447) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:237) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199) ~[mysql-connector-java-8.0.17.jar!/:8.0.17]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.3.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:353) ~[HikariCP-3.3.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201) ~[HikariCP-3.3.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:473) ~[HikariCP-3.3.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71) ~[HikariCP-3.3.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:727) ~[HikariCP-3.3.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:713) ~[HikariCP-3.3.1.jar!/:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
Caused by: com.mysql.cj.exceptions.CJException: Access denied for user 'administrator'@'localhost' (using password: YES)
It looks like it is trying to open a connection in the pool with user = 'administrator'. This naturally fails - because there is no such user in the database.
My question is this: why did Hikari open up one connection successfully to the db with user = 'foo' and password = 'password' but then try to open another connection with a different user = 'administrator'? 'administrator' by the way is the name of user that is running the Java process.
Here is the datasource bean; no other data source bean is defined, I think.
@Bean
@ConfigurationProperties
public DataSource myDataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl(dburl);
    config.setUsername(dbuser);
    config.setPassword(password);
    return new HikariDataSource(config);
}
The datasource - as I mentioned does get created with the correct username and password.
We get the values of the attributes dburl and dbuser from @Value declarations that pull values from the properties files, as follows:
@Value("${myorg.dburl}")
String dburl;
@Value("${myorg.dbusername}")
String dbuser;
String password; // this comes from the cloud secret store per org policy
The application properties file has the following relevant lines:
myorg.dbhost=localhost
myorg.dburl=jdbc:mysql://${myorg.dbhost}:3306/jobs?useSSL=false&autoReconnect=true&failOverReadOnly=false&maxReconnects=10&useServerPrepStmts=false&rewriteBatchedStatements=true&useUnicode=true&characterEncoding=UTF-8
myorg.dbusername=axcode
Thanks in advance!
Update with a new-user experiment: it gets even more curious. I created a new user, 'foouser', on my Linux server and ran the identical Spring Boot fat-jar app. It works correctly, with no exceptions thrown. Here is what I see from the periodic Hikari logs:
2019-09-12 12:30:46.685 DEBUG 3280 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)
2019-09-12 12:31:16.685 DEBUG 3280 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=10, active=0, idle=10, waiting=0)
But when I run the same app as 'administrator', I see only 1 connection made to the DB and an exception thrown when Hikari tries to open a second connection, this time as user 'administrator':
2019-09-12 12:37:31.195 DEBUG 3514 --- [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Pool stats (total=1, active=0, idle=1, waiting=0)
2019-09-12 12:37:46.091 DEBUG 3514 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Cannot acquire connection from data source
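One way to narrow this down (a debugging sketch, not a confirmed diagnosis): Spring Boot also reads application.properties from the current working directory and from environment variables, both of which can differ between OS accounts, so it is worth logging what the pool actually resolves before it starts adding connections. This is the question's own bean plus one log line (LoggerFactory is org.slf4j.LoggerFactory):

@Bean
@ConfigurationProperties
public DataSource myDataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl(dburl);
    config.setUsername(dbuser);
    config.setPassword(password);
    // If this prints the wrong username (or null), the value was overridden by an
    // environment-specific property source for that OS account, not by Hikari itself.
    LoggerFactory.getLogger(getClass())
            .info("Hikari jdbcUrl={} username={}", config.getJdbcUrl(), config.getUsername());
    return new HikariDataSource(config);
}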

How to configure rsyslog template for Exception error for remote logging?

I'm using rsyslog to ship logs to a remote Logstash server, and Logstash on that server expects input data in JSON format. How can I configure an rsyslog template to JSON-ify an exception? For example, I want to send the following exception as a single message.
2017-02-08 21:59:51,727 ERROR :localhost-startStop-1 [jdbc.sqlonly] 1. PreparedStatement.executeBatch() batching 1 statements:
1: insert into CR_CLUSTER_REGISTRY (Cluster_Name, Url, Update_Dttm, Node_Id) values ('customer', 'rmi://ip-10-53-123.123.eu-west-1.compute.internal:1199/2', '02/08/2017 21:59:51.639', '2')
java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 15.00.00.35] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:148)
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:137)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:272)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatch(TDPreparedStatement.java:2584)
at com.teradata.tal.qes.StatementProxy.executeBatch(StatementProxy.java:186)
at net.sf.log4jdbc.StatementSpy.executeBatch(StatementSpy.java:539)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
at com.teradata.tal.common.persistence.dao.SessionWrapper.flush(SessionWrapper.java:920)
at com.teradata.trm.common.persistence.dao.DaoImpl.save(DaoImpl.java:263)
at com.teradata.trm.common.service.AbstractService.save(AbstractService.java:509)
at com.teradata.trm.common.cluster.Cluster.init(Cluster.java:413)
at com.teradata.trm.common.cluster.NodeConfiguration.initialize(NodeConfiguration.java:182)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:73)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:30)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:97)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:324)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:929)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:467)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:385)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:284)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4973)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5467)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:632)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1247)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1898)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.00.00.35] [Error -2801] [SQLState 23000] Duplicate unique prime key error in CIM_META.CR_CLUSTER_REGISTRY.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:301)
at com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:114)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:311)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:200)
at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:137)
at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.run(PreparedBatchStatementController.java:58)
at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:387)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:252)
... 37 more
I have the following rsyslog configuration file. The startmsg.regex aims to "flag" the start of a new message when it sees the "YYYY-mm-dd" date format, and until it sees that format, it should treat any text following the date format as part of the current message.
input(type="imfile"
File="/usr/share/tomcat/dist/logs/trm-error.log*"
Facility="local3"
Tag="trm-error:"
Severity="error"
startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}"
escapeLF="on"
)
if $programname == 'trm-error:' then {
action(
type="omfwd"
Target="10.53.234.234"
Port="5514"
Protocol="udp"
template="textLogTemplate"
)
stop
}
...and the following template.
# Template for non json logs, just sends the message wholesale with extra
# # furniture.
template(name="textLogTemplate" type="list") {
constant(value="{ ")
constant(value="\"type\":\"")
property(name="programname")
constant(value="\", ")
constant(value="\"host\":\"")
property(name="hostname")
constant(value="\", ")
constant(value="\"timestamp\":\"")
property(name="timestamp" dateFormat="rfc3339")
constant(value="\", ")
constant(value="\"#version\":\"1\", ")
constant(value="\"customer\":\"customer\", ")
constant(value="\"role\":\"app2\", ")
constant(value="\"sourcefile\":\"")
property(name="$!metadata!filename")
constant(value="\", ")
constant(value="\"message\":\"")
property(name="rawmsg" format="json")
constant(value="\"}\n")
}
However, Logstash complains about a "jsonparseerror" when it tries to parse the log as JSON. Any clues?
The rsyslog configuration files I'm using turned out to be correct; that is, the Java exception log is indeed wrapped into a valid JSON document. However, Logstash is complaining about a _jsonparsefailure, so this problem is probably related to the Logstash Ruby code, and not to the rsyslog side.
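For comparison on the Logstash side, a minimal input for this pipeline would look like the following sketch; the port matches the omfwd action above, and the json codec parses each UDP datagram as one event:

input {
  udp {
    port => 5514
    codec => json
  }
}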