I am running a source Kafka connector, but unfortunately I am getting the error below:
{"name":"supplier-central","connector":{"state":"RUNNING","worker_id":"192.168.208.4:8083"},"tasks":[{"id":0,"state":"FAILED","worker_id":"192.168.208.4:8083","trace":"org.apache.kafka.connect.errors.ConnectException: extraneous input 'ASC' expecting {<EOF>, '--'}\n\tat io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)\n\tat io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:208)\n\tat io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:508)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.notifyEventListeners(BinaryLogClient.java:1095)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:943)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:580)\n\tat com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:825)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: io.debezium.text.ParsingException: extraneous input 'ASC' expecting {<EOF>, '--'}\n\tat io.debezium.antlr.ParsingErrorListener.syntaxError(ParsingErrorListener.java:40)\n\tat org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)\n\tat org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.reportUnwantedToken(DefaultErrorStrategy.java:349)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.singleTokenDeletion(DefaultErrorStrategy.java:513)\n\tat org.antlr.v4.runtime.DefaultErrorStrategy.sync(DefaultErrorStrategy.java:238)\n\tat io.debezium.ddl.parser.mysql.generated.MySqlParser.root(MySqlParser.java:817)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:68)\n\tat io.debezium.connector.mysql.antlr.MySqlAntlrDdlParser.parseTree(MySqlAntlrDdlParser.java:41)\n\tat io.debezium.antlr.AntlrDdlParser.parse(AntlrDdlParser.java:80)\n\tat io.debezium.connector.mysql.MySqlSchema.applyDdl(MySqlSchema.java:307)\n\tat io.debezium.connector.mysql.BinlogReader.handleQueryEvent(BinlogReader.java:694)\n\tat io.debezium.connector.mysql.BinlogReader.handleEvent(BinlogReader.java:492)\n\t... 5 more\n"}],"type":"source"}**
and in the Debezium logs I am getting the error below:
2019-08-23 05:02:40,101 INFO MySQL|data_lake|task [Consumer clientId=supplier-central-dbhistory, groupId=supplier-central-dbhistory] Member supplier-central-dbhistory-41cab001-1c64-4ab2-8869-58dca22b783c sending LeaveGroup request to coordinator kafka:9092 (id: 2147483646 rack: null) [org.apache.kafka.clients.consumer.internals.AbstractCoordinator]
Aug 23, 2019 5:02:41 AM com.github.shyiko.mysql.binlog.BinaryLogClient connect
INFO: Connected to 52.76.148.206:3306 at mysql-bin.010785/66551561 (sid:425, cid:315812)
2019-08-23 05:02:41,200 INFO || WorkerSourceTask{id=supplier-central-0} Source task finished initialization and start [org.apache.kafka.connect.runtime.WorkerSourceTask]
2019-08-23 05:02:41,841 INFO || WorkerSourceTask{id=supplier-central-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2019-08-23 05:02:41,841 INFO || WorkerSourceTask{id=supplier-central-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
2019-08-23 05:02:41,841 ERROR || WorkerSourceTask{id=supplier-central-0} Task threw an uncaught and unrecoverable exception [org.apache.kafka.connect.runtime.WorkerTask]
2019-08-23 05:02:41,841 ERROR || WorkerSourceTask{id=supplier-central-0} Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
2019-08-23 05:02:41,859 INFO MySQL|data_lake|task [Producer clientId=supplier-central-dbhistory] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. [org.apache.kafka.clients.producer.KafkaProducer]
I am not using Schema Registry or Avro; the source DB is MySQL.
My other source connectors work fine, and I am not able to identify the error. The source DB is a third-party DB, so someone may have changed something in it, but as I understand it any such change would also show up in the binlog for the connector to handle, so maybe that is not the issue.
Can anyone tell me the problem and a solution for this?
Connector configuration:
curl -i -X POST -H "Accept:application/json" \
-H "Content-Type:application/json" http://localhost:38083/connectors/ \
-d '{
  "name": "supplier-central",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "localhost",
    "database.port": "3306",
    "database.user": "ankitg",
    "snapshot.mode": "initial",
    "include.schema.changes": "true",
    "database.password": "abc#123",
    "database.server.id": "425",
    "database.server.name": "data_lake",
    "database.whitelist": "supplier",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "history.supplier_central",
    "table.whitelist": "supplier_central.suppliers,supplier_central.supplier_business_types,supplier_central.supplier_address,supplier_central.supplier_banks,supplier_central.supplier_profile,supplier_central.supplier_documents",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter"
  }
}'
I got this error when I used a different database name and table name in the configuration. Check that your database.whitelist and table.whitelist match each other and are configured correctly.
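For example, in the configuration above database.whitelist is "supplier", while every entry in table.whitelist is prefixed with supplier_central, so the tables can never match. Assuming the database is actually named supplier_central (verify against your server), a consistent pair of settings would be a sketch like:

"database.whitelist": "supplier_central",
"table.whitelist": "supplier_central.suppliers,supplier_central.supplier_address"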
I started the Docker containers (cp-kafka-connect-base:7.0.1) and installed the self-managed connector (debezium-connector-mysql:latest).
The Docker containers work fine.
When I try to create the connector with the configuration below,
{
  "name": "MySqlConnectorConnector_0",
  "config": {
    "database.history.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule",
    "database.history.kafka.topic": "pageviews",
    "database.history.consumer.security.protocol": "SASL_SSL",
    "database.history.consumer.ssl.endpoint.identification.algorithm": "https",
    "schema.history.internal.kafka.topic": "PLAIN",
    "database.whitelist": "database-test",
    "database.history.producer.sasl.mechanism": "PLAIN",
    "database.history.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule",
    "database.history.producer.ssl.endpoint.identification.algorithm": "https",
    "database.history.producer.security.protocol": "SASL_SSL",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.server.name": "database-test:3306",
    "schema.history.internal.kafka.bootstrap.servers": "localhost:9092",
    "database.history.consumer.sasl.mechanism": "PLAIN",
    "name": "MySqlConnectorConnector_0",
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "topic.prefix": "test2",
    "database.hostname": "database-test",
    "database.port": "3306",
    "database.user": "admin",
    "database.password": "**********",
    "database.server.id": "11",
    "database.ssl.mode": "disabled",
    "connect.keep.alive": "true",
    "include.schema.changes": "true",
    "inconsistent.schema.handling.mode": "skip"
  }
}
the connector is created, but the connector task fails.
Connector log:
[2023-01-24 17:25:44,827] INFO [Consumer clientId=test3-schemahistory, groupId=test3-schemahistory] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2023-01-24 17:25:44,827] WARN [Consumer clientId=test3-schemahistory, groupId=test3-schemahistory] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2023-01-24 17:25:44,827] WARN [Consumer clientId=test3-schemahistory, groupId=test3-schemahistory] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2023-01-24 17:25:44,831] INFO WorkerSourceTask{id=MySqlConnectorConnector_01-0} Either no records were produced by the task since the last offset commit, or every record has been filtered out by a transformation or dropped due to transformation or conversion errors. (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2023-01-24 17:25:44,831] WARN Couldn't commit processed log positions with the source database due to a concurrent connector shutdown or restart (io.debezium.connector.common.BaseSourceTask)
[2023-01-24 17:25:44,873] INFO [Consumer clientId=test3-schemahistory, groupId=test3-schemahistory] Resetting generation due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2023-01-24 17:25:44,873] INFO [Consumer clientId=test3-schemahistory, groupId=test3-schemahistory] Request joining group due to: consumer pro-actively leaving the group (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2023-01-24 17:25:44,874] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics)
[2023-01-24 17:25:44,874] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics)
[2023-01-24 17:25:44,874] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
[2023-01-24 17:25:44,875] INFO App info kafka.consumer for test3-schemahistory unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2023-01-24 17:25:44,875] ERROR WorkerSourceTask{id=MySqlConnectorConnector_01-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
[2023-01-24 17:33:57,334] INFO [Consumer clientId=test3-schemahistory, groupId=test3-schemahistory] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2023-01-24 17:33:57,334] WARN [Consumer clientId=test3-schemahistory, groupId=test3-schemahistory] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
I tried adding some extra config parameters, but the documentation is not clear to me. I cannot figure out how to fix this or what the connector config should look like.
As the error says, schema.history.internal.kafka.bootstrap.servers needs to be a host:port list pointing at the Kafka brokers, not a Java class name.
Similarly, schema.history.internal.kafka.topic should be a plain string (a topic name), not a Java class.
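A minimal sketch of those two settings, assuming the broker is reachable from inside the Connect container under the hostname kafka (the hostname and topic name here are illustrative; note that localhost inside a container resolves to the container itself, not to the broker):

"schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
"schema.history.internal.kafka.topic": "schema-changes.test2"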
Our origin-node.service on the master node fails with:
root@master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root@master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
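The expiry of the pem file the kubelet is loading can be confirmed directly with openssl (a quick check; openssl x509 reads the first certificate in the file):

openssl x509 -noout -enddate -in /etc/origin/node/certificates/kubelet-client-current.pem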
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contains an expired certificate and the service tries to create a CSR using an endpoint which is probably not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation Redeploying Certificates, but this fails while detecting an expired certificate:
root@master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root@master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
There is a guide for OpenShift 4.4 for Recovering from expired control plane certificates, but that does not apply for 3.11 and we did not find such a guide for our version.
Is it possible to recreate the expired certificates without a running master node for 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root@master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root@master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root@master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment. The error occurs because the certificate had expired; I "cheated" by changing the OS date to before the expiry date, and the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2021-02-20 20:22:21 -02; 6min ago
Docs: https://github.com/openshift/origin
Main PID: 37230 (hyperkube)
Memory: 79.0M
CGroup: /system.slice/origin-node.service
└─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
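For completeness, the date change described above can be done like this (illustrative timestamp, chosen to be before the 2020-02-20 expiry; disable NTP first and re-enable it once the certificates are redeployed):

sudo timedatectl set-ntp false
sudo date -s "2020-02-19 00:00:00"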
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running the redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
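If you first want a report of which certificates are expired, the openshift_certificate_expiry role also ships a report-only playbook (path as shipped with openshift-ansible 3.11; verify it exists in your checkout):

ansible-playbook -i /etc/ansible/hosts playbooks/openshift-checks/certificate_expiry/easy-mode.yaml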
I was trying to capture MySQL changes in a Kafka console consumer, following this tutorial.
In the MySQL config file my.cnf, I added server-id as 0 (found via this command: mysqld --verbose --help).
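For reference, Debezium's MySQL connector requires the server to have a non-zero server-id and row-based binary logging enabled; a minimal my.cnf sketch (values are illustrative):

[mysqld]
server-id        = 223344   # must be non-zero; 0 disables replication clients
log_bin          = mysql-bin
binlog_format    = ROW
binlog_row_image = FULL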
So, when creating the connector with this:
curl -i -X POST -H "Accept:application/json" \
-H "Content-Type:application/json" http://localhost:8083/connectors/ \
-d '{
  "name": "mysql-connector8",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "localhost",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": 1,
    "database.server.name": "tigerhrm",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "dbhistory.demo3",
    "include.schema.changes": "true",
    "tasks.max": 1
  }
}'
When I set database.server.id=0 to match my my.cnf, it gives an error, but when I change it to any random number, the connector is created but then fails with a subsequent error:
[2020-02-06 11:58:25,190] INFO USE flyDB (io.debezium.connector.mysql.SnapshotReader:803)
[2020-02-06 11:58:25,191] INFO CREATE TABLE oauth_access_token ( token_id varchar(255) DEFAULT NULL, token blob, authentication_id varchar(255) NOT NULL, user_name varchar(255) DEFAULT NULL, client_id varchar(255) DEFAULT NULL, authentication blob, refresh_token varchar(255) DEFAULT NULL, PRIMARY KEY (authentication_id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 (io.debezium.connector.mysql.SnapshotReader:803)
[2020-02-06 11:58:25,192] INFO Step 7: committing transaction (io.debezium.connector.mysql.SnapshotReader:611)
[2020-02-06 11:58:25,193] INFO Step 8: releasing global read lock to enable MySQL writes (io.debezium.connector.mysql.SnapshotReader:625)
[2020-02-06 11:58:25,193] INFO Writes to MySQL tables prevented for a total of 00:00:00.684 (io.debezium.connector.mysql.SnapshotReader:635)
[2020-02-06 11:58:25,193] ERROR Failed due to error: Aborting snapshot due to error when last running 'UNLOCK TABLES': com/mysql/jdbc/CharsetMapping (io.debezium.connector.mysql.SnapshotReader:162)
org.apache.kafka.connect.errors.ConnectException: com/mysql/jdbc/CharsetMapping
    at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:183)
    at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:161)
    at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:665)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: com/mysql/jdbc/CharsetMapping
    at io.debezium.connector.mysql.MySqlValueConverters.charsetFor(MySqlValueConverters.java:300)
    at io.debezium.connector.mysql.MySqlValueConverters.converter(MySqlValueConverters.java:270)
    at io.debezium.relational.TableSchemaBuilder.createValueConverterFor(TableSchemaBuilder.java:330)
    at io.debezium.relational.TableSchemaBuilder.convertersForColumns(TableSchemaBuilder.java:259)
    at io.debezium.relational.TableSchemaBuilder.createKeyGenerator(TableSchemaBuilder.java:148)
    at io.debezium.relational.TableSchemaBuilder.create(TableSchemaBuilder.java:127)
    at io.debezium.connector.mysql.MySqlSchema.lambda$applyDdl$3(MySqlSchema.java:365)
    at java.lang.Iterable.forEach(Iterable.java:75)
    at io.debezium.connector.mysql.MySqlSchema.applyDdl(MySqlSchema.java:360)
    at io.debezium.connector.mysql.SnapshotReader.lambda$execute$9(SnapshotReader.java:422)
    at io.debezium.jdbc.JdbcConnection.query(JdbcConnection.java:389)
    at io.debezium.jdbc.JdbcConnection.query(JdbcConnection.java:344)
    at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:420)
    ... 1 more
where oauth_access_token is a table in my DB (there are multiple DBs).
Checking the connectors' status, I got this:
curl -s "http://localhost:8083/connectors" | jq '.[]' | xargs -I {mysql-connector} curl -s "http://localhost:8083/connectors/mysql-connector/status" | jq -c -M '[.name,.connector.state,.tasks[].state] |
join(":|:")' | column -s : -t | tr -d \" | sort
mysql-connector | RUNNING | FAILED
mysql-connector | RUNNING | FAILED
mysql-connector | RUNNING | FAILED
mysql-connector | RUNNING | FAILED
mysql-connector | RUNNING | FAILED
mysql-connector | RUNNING | FAILED
mysql-connector | RUNNING | FAILED
mysql-connector | RUNNING | FAILED
The last column is the task status, and hence no database change is detected; topics should have been created, named after the tables in the DB. How do I solve this?
Added: when I delete that table from the DB, the error occurs for another table, and so on! All services are running on my local Ubuntu machine. Any kind of help is much appreciated!
When I was on Windows this file worked well, but now I must get my project running on Linux and I don't know how to do it. Below is logstash.conf.
Command: ./logstash -f logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test"
    jdbc_user => "root"
    jdbc_password => "password"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.45.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"
    statement => "SELECT * FROM Pro WHERE last_modificate > :sql_last_value"
    use_column_value => true
    tracking_column => "last_modificate"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    action => "update"
    document_id => "%{id}"
    doc_as_upsert => true
    index => "blog"
    document_type => "pro"
  }
}
And below is the error:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-02-06 16:24:38.441 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-02-06 16:24:38.495 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.6.0"}
[WARN ] 2019-02-06 16:24:58.845 [Converge PipelineAction::Create<main>] elasticsearch - You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], doc_as_upsert=>true, action=>"update", index=>"blog", id=>"0d9b8021264f8db7c25bca76842096f28d088e42d8e84a573b39874bc2c38c19", document_id=>"%{id}", document_type=>"pro", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0d66fa34-7e13-432a-9405-8084af971c1a", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>false, ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[INFO ] 2019-02-06 16:24:58.963 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-02-06 16:25:00.378 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-02-06 16:25:01.241 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-02-06 16:25:02.692 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2019-02-06 16:25:02.705 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2019-02-06 16:25:02.805 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2019-02-06 16:25:02.881 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2019-02-06 16:25:03.044 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2019-02-06 16:25:03.726 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5e9c3d30 run>"}
[INFO ] 2019-02-06 16:25:03.868 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-02-06 16:25:05.345 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
Wed Feb 06 16:26:05 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
Wed Feb 06 16:26:06 CET 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[ERROR] 2019-02-06 16:26:06.396 [Ruby-0-Thread-15: :1] jdbc - Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'"}
{ 2014 rufus-scheduler intercepted an error:
2014 job:
2014 Rufus::Scheduler::CronJob "* * * * *" {}
2014 error:
2014 2014
2014 Sequel::DatabaseConnectionError
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
2014 com.mysql.jdbc.SQLError.createSQLException(com/mysql/jdbc/SQLError.java:965)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3973)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:3909)
2014 com.mysql.jdbc.MysqlIO.checkErrorPacket(com/mysql/jdbc/MysqlIO.java:873)
2014 com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(com/mysql/jdbc/MysqlIO.java:1710)
2014 com.mysql.jdbc.MysqlIO.doHandshake(com/mysql/jdbc/MysqlIO.java:1226)
2014 com.mysql.jdbc.ConnectionImpl.coreConnect(com/mysql/jdbc/ConnectionImpl.java:2188)
2014 com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(com/mysql/jdbc/ConnectionImpl.java:2219)
2014 com.mysql.jdbc.ConnectionImpl.createNewIO(com/mysql/jdbc/ConnectionImpl.java:2014)
2014 com.mysql.jdbc.ConnectionImpl.<init>(com/mysql/jdbc/ConnectionImpl.java:776)
2014 com.mysql.jdbc.JDBC4Connection.<init>(com/mysql/jdbc/JDBC4Connection.java:47)
2014 java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)
The error is self-explanatory:
2014 Java::JavaSql::SQLException: Access denied for user 'root'@'localhost'
Kindly check the configured credentials. On Linux you have to go to the /usr/share/logstash directory and then run the following command.
sudo bin/logstash -f /etc/logstash/conf.d/yourfilename.conf
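To rule out the credentials themselves, you can first test the same user and password directly against MySQL (values here mirror the config above; adjust as needed):

mysql -h localhost -u root -p
-- then, inside the MySQL shell, confirm the account and its host scope:
SELECT user, host FROM mysql.user WHERE user = 'root';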
My Proton instance fails with a java.lang.NullPointerException whenever an event is sent by Orion.
This is the Proton log:
proton_1 | 01-Jul-2016 09:46:03.117 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
proton_1 | 01-Jul-2016 09:46:03.125 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
proton_1 | 01-Jul-2016 09:46:03.126 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
proton_1 | last attribute name: null last value: null
proton_1 | 01-Jul-2016 09:46:03.130 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
proton_1 | 01-Jul-2016 09:46:03.131 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
proton_1 | 01-Jul-2016 09:46:03.132 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
I've read the Appendix of the User guide and double checked the event name and the attributes list.
This is an xml sent by orion:
POST /ProtonOnWebServer/rest/events HTTP/1.1
User-Agent: orion/0.28.0 libcurl/7.19.7
Host: localhost:8080
Accept: application/xml, application/json
Content-length: 772
Content-type: application/xml
<notifyContextRequest>
<subscriptionId>57762eb9982959644644f9ee</subscriptionId>
<originator>localhost</originator>
<contextResponseList>
<contextElementResponse>
<contextElement>
<entityId type="Ape" isPattern="false">
<id>u1</id>
</entityId>
<contextAttributeList>
<contextAttribute>
<name>carsharing</name>
<type>urn:x-ogc:def:trs:IDAS:1.0:ISO8601</type>
<contextValue>2016-07-01T11:01:06</contextValue>
</contextAttribute>
</contextAttributeList>
</contextElement>
<statusCode>
<code>200</code>
<reasonPhrase>OK</reasonPhrase>
</statusCode>
</contextElementResponse>
</contextResponseList>
</notifyContextRequest>
This is the definition of the Proton project (BTW, this is the project JSON copied from the server filesystem, because the REST API also fails with a NullPointerException):
{
  "epn": {
    "events": [
      {
        "name": "ApeContextUpdate",
        "createdDate": "Fri Jul 01 2016",
        "attributes": [
          {
            "name": "entityId",
            "type": "String",
            "dimension": "0"
          },
          {
            "name": "entityType",
            "type": "String",
            "dimension": "0"
          },
          {
            "name": "carsharing",
            "type": "Date",
            "dimension": "0"
          }
        ]
      }
    ],
    "epas": [],
    "contexts": {
      "temporal": [],
      "segmentation": [],
      "composite": []
    },
    "consumers": [],
    "producers": [],
    "name": "t0"
  }
}
and this is my docker-compose file:
mongo:
  image: mongo:2.6
  command: --smallfiles --quiet
proton:
  image: fiware/proactivetechnologyonline
  ports:
    - "8080:8080"
orion:
  image: fiware/orion:0.28
  links:
    - mongo
    - proton
  command: -dbhost mongo --silent
  ports:
    - "1026:1026"
I'm using Orion 0.28 (the last version that supports XML notifications) and the latest Proton.
UPDATE 1 - catalina.log
07-Jul-2016 07:52:39.914 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom started event message body reader
07-Jul-2016 07:52:39.924 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Event: ApeContextUpdate
07-Jul-2016 07:52:39.924 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom Could not parse XML NGSI event java.lang.NullPointerException, reason: null
last attribute name: null last value: null
07-Jul-2016 07:52:39.928 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.providers.EventXmlNgsiMessageReader.readFrom finished event message body reader
07-Jul-2016 07:52:39.929 INFO [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent starting submitNewEvent
07-Jul-2016 07:52:39.929 SEVERE [http-nio-8080-exec-1] com.ibm.hrl.proton.webapp.resources.EventResource.submitNewEvent Could not send event, reason: java.lang.NullPointerException, message: null
The problem seems to be that your Proton instance is not actually configured with your project's JSON definition file; therefore, when sending a POST of any event type, you will always get a NullPointerException, since no such event can be found in Proton's metadata.
Please try to configure your instance's admin interface, as described here:
http://proactive-technology-online.readthedocs.io/en/latest/Proton-InstallationAndAdminGuide/index.html (Setup Apache Tomcat for management part)
And then run the following query:
GET http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions
This should return all the project definitions this instance has.
And then, if you see yours in the list, you can retrieve your specific project's definition by running:
GET http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions/{definition_name}
I think this will either return nothing or be empty.
You can update the definitions by using the RESTful interface as described here: http://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/Complex_Event_Processing_Open_RESTful_API_Specification (under the Managing Definitions Repository part).
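For example, a sketch of updating the "t0" project shown above via that interface (method, path, and payload shape per the linked spec; verify them there, and save the project JSON into a file such as t0.json first):

curl -X PUT -H "Content-Type: application/json" -d @t0.json \
  http://<ip of the machine running Proton>:8080/ProtonOnWebServerAdmin/resources/definitions/t0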