I had my Debezium MySQL source connector working on Kafka. I added another Debezium MySQL source connector using the same database but with different data formats. As a result, my first connector started showing the following error:
[2019-07-11 10:29:09,125] ERROR WorkerSourceTask{id=debezium-connector-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: Encountered change event for table db.user whose schema isn't known to this connector
at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:208)
at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:508)
at com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1095)
at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:943)
at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:580)
at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:825)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: Encountered change event for table db.user whose schema isn't known to this connector
at io.debezium.connector.mysql.BinlogReader.informAboutUnknownTableIfRequired(BinlogReader.java:758)
at io.debezium.connector.mysql.BinlogReader.handleUpdateTableMetadata(BinlogReader.java:733)
at io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:492)
... 5 more
[2019-07-11 10:29:09,125] ERROR WorkerSourceTask{id=debezium-connector-krazybee-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
[2019-07-11 10:29:09,125] INFO Stopping MySQL connector task (io.debezium.connector.mysql.MySqlConnectorTask:430)
[2019-07-11 10:29:09,125] INFO ChainedReader: Stopping the binlog reader (io.debezium.connector.mysql.ChainedReader:121)
[2019-07-11 10:29:09,126] INFO Discarding 0 unsent record(s) due to the connector shutting down (io.debezium.connector.mysql.BinlogReader:129)
I have restarted the Debezium connector using the REST API. To the best of my understanding, the connector has a mismatch in its database history schema, but I am unable to figure out how to correct it without deleting the existing connector. I also reloaded the existing connector with its previous configuration using a PUT request, but to no avail.
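The restart and the config reload were done with the standard Kafka Connect REST endpoints, roughly like this (connector name and host here are illustrative, not the real ones):

curl -X POST http://localhost:8083/connectors/debezium-connector/restart
curl -X PUT -H "Content-Type: application/json" \
  --data @connector-config.json \
  http://localhost:8083/connectors/debezium-connector/config

where connector-config.json contains just the flat key/value configuration map (the contents of the "config" object, not the full {"name": ..., "config": ...} wrapper), which is what the PUT endpoint expects.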
I believe you are using the same database.history.kafka.topic for both connectors. You should use a unique history topic for each connector instance.
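A minimal sketch of what that looks like (all names here are illustrative):

{
  "name": "mysql-connector-a",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.connector-a"
  }
}

The second connector keeps the same shape but gets its own database.server.id, database.server.name, and database.history.kafka.topic (e.g. dbhistory.connector-b). When two connectors share one history topic, each reads schema history the other wrote, which is what produces the "schema isn't known to this connector" failure.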
When I check the status of my Debezium connector via the Kafka Connect REST API, I see this error message for the connector:
org.apache.kafka.connect.errors.ConnectException: The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires. Error code: 1236; SQLSTATE: HY000.
at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:197)
at io.debezium.connector.mysql.BinlogReader$ReaderThreadLifecycleListener.onCommunicationFailure(BinlogReader.java:997)
at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:950)
at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:580)
at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:825)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.github.shyiko.mysql.binlog.network.ServerException: The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.
at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:914)
... 3 more
Is this an issue with how I am configuring my Debezium connector, or an issue with MySQL? What's crazy is that this error is still thrown even when I set the option snapshot.mode to never! According to the documentation, when snapshot.mode is set to either never or when_needed, the connector should not require the GTID, so I am confused as to what is happening.
The problem is that Debezium was probably down for some time, and some of the transactions it has not yet seen are no longer available on the server.
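You can verify this on the MySQL side: if the GTIDs recorded in the connector's offsets fall inside the server's purged set, those binlog events are simply gone. A quick check (plain MySQL, nothing Debezium-specific):

SELECT @@gtid_purged;
SHOW BINARY LOGS;

If the connector's stored GTID set overlaps @@gtid_purged, the only way forward is to reset the connector's offsets (and usually re-snapshot).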
That could be an issue with wrong offsets for the connector. So I deleted the connector, deleted all related Kafka topics (schema history, etc.), and cleaned the offsets by following this guide: https://debezium.io/documentation/faq/#how_to_remove_committed_offsets_for_a_connector
And it helped! After re-creation, the connector now works as expected.
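The offset cleanup in that guide boils down to writing a tombstone (NULL value) for the connector's key into the Connect offsets topic while the connector is stopped. A sketch with kafkacat, assuming the offsets topic is my_connect_offsets, the connector is named inventory-connector, and database.server.name is dbserver1 (all illustrative):

# find the partition and the exact key the connector's offsets are stored under
kafkacat -b localhost:9092 -C -t my_connect_offsets -f 'Partition(%p) %k %s\n'

# publish a NULL value (tombstone) for that key to the same partition; -Z sends the empty value as NULL
echo '["inventory-connector",{"server":"dbserver1"}]|' | \
  kafkacat -P -b localhost:9092 -t my_connect_offsets -K '|' -p 11 -Z

The partition number (-p 11) is whatever the first command reported for that key.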
I am trying to create multiple connectors on the same database, but I am getting this exception:
org.apache.kafka.connect.errors.ConnectException: A slave with the same server_uuid/server_id as this slave has connected to the master; the first event 'mysql-bin.000004' at 1088, the last event read from './mysql-bin.000004' at 1310, the last byte read from './mysql-bin.000004' at 1310. Error code: 1236; SQLSTATE: HY000.
at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
In each connector configuration, the field 'database.server.id' must be unique.
Delete each connector (except one) and recreate it with a new 'database.server.id'.
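Debezium registers with MySQL as if it were a replica, using database.server.id as its replica server ID, and MySQL kicks off two replicas that present the same ID, which is exactly the error above. A minimal sketch of the differing fragments (IDs are arbitrary, they just must differ):

First connector:   "database.server.id": "184054"
Second connector:  "database.server.id": "184055"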
I have a running Debezium setup doing CDC from MySQL. Now I want to create one more MySQL connector for another MySQL server, but I don't want a snapshot of the existing data; I want to start the new Debezium connector from a specific binlog file and position.
I read some questions on Stack Overflow suggesting manually inserting the record into the connect-offsets topic. But if I do this, what will happen to my existing setup?
I tried this solution on a test server, but it was not working:
kafka-console-producer --broker-list localhost:9092 --topic connect-offsets
>{"file":"mysql-bin.000002","pos":2012}
>[2019-12-30 05:43:52,666] WARN [Producer clientId=console-producer] Got error produce response with correlation id 4 on topic-partition connect-offsets-5. Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2019-12-30 05:43:52,767] WARN [Producer clientId=console-producer] Got error produce response with correlation id 5 on topic-partition connect-offsets-5, Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2019-12-30 05:43:52,870] WARN [Producer clientId=console-producer] Got error produce response with correlation id 6 on topic-partition connect-offsets-5, Error: CORRUPT_MESSAGE (org.apache.kafka.clients.producer.internals.Sender)
[2019-12-30 05:43:52,975] ERROR Error when sending message to topic connect-offsets with key: null, value: 38 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, has a null key for a compacted topic, or is otherwise corrupt.
I'm not sure how to achieve this. Can somebody help me with this?
Records in the connect-offsets topic are used for connector offset management. For the Debezium MySQL connector, each record in that topic has a key which contains the connector name and the MySQL server name that you configured in your connector configuration.
So you need to produce the record with that key as well; only then will the Debezium connector be able to read those offsets.
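That also explains the errors above: connect-offsets is a compacted topic, and a record without a key is rejected (the "null key for a compacted topic" part of the CorruptRecordException). A sketch of the corrected produce call, with key parsing enabled (connector name and server name are illustrative; the "server" value must match the database.server.name in your connector config):

kafka-console-producer --broker-list localhost:9092 --topic connect-offsets \
  --property "parse.key=true" \
  --property "key.separator=|"
>["my-new-connector",{"server":"dbserver2"}]|{"file":"mysql-bin.000002","pos":2012}

Because the key carries the connector name, offsets for your existing connectors are untouched as long as you only write keys for the new connector.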
I am trying to run generateChangeLog for a database on a Percona server, and I get the error below when I do so.
Starting Liquibase at Wed, 05 Dec 2018 22:34:37 EST (version 3.6.2 built at 2018-07-03 11:28:09)
Unexpected error running Liquibase: liquibase.exception.DatabaseException: liquibase.exception.UnexpectedLiquibaseException: Error during testing for MySQL/MariaDB JDBC driver bug: could not retrieve JDBC metadata information for temporary table 'TMP_XDBOCVCKWHSQYXKP'
liquibase.exception.LiquibaseException: liquibase.command.CommandExecutionException: liquibase.exception.DatabaseException: liquibase.exception.UnexpectedLiquibaseException: Error during testing for MySQL/MariaDB JDBC driver bug: could not retrieve JDBC metadata information for temporary table 'TMP_XDBOCVCKWHSQYXKP'
at liquibase.integration.commandline.Main.doMigration(Main.java:1043)
at liquibase.integration.commandline.Main.run(Main.java:191)
at liquibase.integration.commandline.Main.main(Main.java:129)
Caused by: liquibase.command.CommandExecutionException: liquibase.exception.DatabaseException: liquibase.exception.UnexpectedLiquibaseException: Error during testing for MySQL/MariaDB JDBC driver bug: could not retrieve JDBC metadata information for temporary table 'TMP_XDBOCVCKWHSQYXKP'
I am running it from the command line using the statement below:
liquibase --driver=com.mysql.cj.jdbc.Driver --classpath=C:/liquibase-3.6.2-bin/jars/mysql-connector-java-8.0.13.jar --changeLogFile=db.changelog-1.0.xml --url="jdbc:mysql://REMOTE_SERVER_IP:3306/DB_NAME?autoReconnect=true" --username=USER_NAME --password=PASSWORD --logLevel=info generateChangeLog
Has anyone encountered this issue before? I tried both the old and the new JDBC drivers, but it did not help.
I hit the same issue. With logLevel set to debug, you can see that Liquibase creates and drops a temporary table, and that temporary table seemed to cause the issue.
I was able to work around it by using MySQL Workbench to copy the schema to a regular MySQL instance and then running generateChangeLog there.
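If Workbench is not at hand, a plain mysqldump of the schema does the copy step just as well; a sketch, assuming a scratch MySQL instance on localhost (placeholders mirror the command above):

mysqldump -h REMOTE_SERVER_IP -u USER_NAME -p --no-data DB_NAME > schema.sql
mysql -h localhost -u root -p -e "CREATE DATABASE DB_NAME"
mysql -h localhost -u root -p DB_NAME < schema.sql
liquibase --driver=com.mysql.cj.jdbc.Driver --classpath=C:/liquibase-3.6.2-bin/jars/mysql-connector-java-8.0.13.jar --changeLogFile=db.changelog-1.0.xml --url="jdbc:mysql://localhost:3306/DB_NAME" --username=root --password=PASSWORD generateChangeLog

--no-data dumps only the DDL, which is all generateChangeLog needs.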
I have the same problem.
Pentaho Initialization Exception
The following errors were detected: One or more system listeners failed. These are set in systemListeners.xml.
org.pentaho.platform.api.engine.PentahoSystemException: PentahoSystem.ERROR_0014 - Error while trying to execute startup sequence for org.pentaho.platform.scheduler2.quartz.EmbeddedQuartzSystemListener
Please go through the logs:
The Catalina log https://www.dropbox.com/s/knpuu6nazwa8p0g/catalina.out?dl=0
The Pentaho log file https://www.dropbox.com/s/fz99afs9ov0pnfs/pentaho.log?dl=0
Followed tutorial: https://interestingittips.wordpress.com/2011/05/05/complete-pentaho-installation-on-ubuntu-part-2/
Please help me! Thanks in advance
Going through your logs, both catalina.out and the Pentaho log show the same issue. I think you have to check your DB connections:
Failure occured during job recovery. [See nested exception: org.quartz.JobPersistenceException: Failed to obtain DB connection from data source 'myDS': java.sql.SQLException: There is no DataSource named 'myDS' [See nested exception: java.sql.SQLException: There is no DataSource named 'myDS']]
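That message means Quartz cannot find any DataSource registered under the name 'myDS' at startup. In a Pentaho install the Quartz configuration typically lives in pentaho-solutions/system/quartz/quartz.properties (path from a stock install; verify on yours). Check that the job store's data source is actually defined there and that the database it points at is reachable; the relevant properties look roughly like this (connection details are illustrative):

# which data source the JDBC job store uses
org.quartz.jobStore.dataSource = myDS

# definition of that data source
org.quartz.dataSource.myDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL = jdbc:mysql://localhost:3306/quartz
org.quartz.dataSource.myDS.user = pentaho_user
org.quartz.dataSource.myDS.password = password
org.quartz.dataSource.myDS.maxConnections = 5

If that block is missing, misnamed, or points at a database that is down or has bad credentials, EmbeddedQuartzSystemListener fails during startup job recovery exactly like this.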