ActiveMQ ProtocolException Invalid CONNECT encoding - warnings

What does this warning mean in ActiveMQ?
jvm 1 | WARN | Transport Connection to: tcp://xx.xxx.xxx.xxx:xxxxx failed: java.net.ProtocolException: Invalid CONNECT encoding
I use ActiveMQ as the broker and connect from Android over MQTT. I get this warning in the ActiveMQ console on every connect.

The error indicates that something is wrong with your CONNECT frame, i.e. some MQTT spec violation. It's hard to say more without additional information on the error. Things like the clean-session flag not being set when the clientId value is zero length can lead to protocol-level errors.
You can enable more detailed logging from the MQTT transport in your log4j.properties using something like:
log4j.logger.org.apache.activemq.transport.mqtt=TRACE
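For reference, a minimal sketch of a spec-compliant connect using the Eclipse Paho client (an assumption; the question does not say which Android MQTT client is in use, and the broker URL is a placeholder):
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class MqttConnectCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder broker address; point this at your ActiveMQ MQTT transport.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "myClientId");
        MqttConnectOptions opts = new MqttConnectOptions();
        // MQTT 3.1.1 requires cleanSession=true when the client ID is empty;
        // breaking that rule is one way to provoke a broker-side protocol error.
        opts.setCleanSession(true);
        client.connect(opts);
        client.disconnect();
    }
}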

Related

Google Application Credential error with Cloud SQL Proxy

I have created a PostgreSQL Cloud SQL instance with a public IP and added my home IP to the whitelist.
I have installed the Cloud SQL Proxy as described in the Google docs.
When I run the proxy, I get this error:
The proxy has encountered a terminal error: unable to start: failed to get instance
private key should be a PEM or plain PKCS1 or PKCS8; parse error: asn1: syntax error: sequence truncated
The error seems to refer to the OAuth 2.0 client ID credential JSON key.
Can you help me understand this error and how to fix it?
Thanks
I believe the `private key should be a PEM or plain PKCS1 or PKCS8; parse error: asn1: syntax error: sequence truncated` message is only concerned with the private_key field, rather than the JSON in which it is contained. So it seems the key text might be mangled in some way.
You can follow the documentation on how to connect to your Cloud SQL instance using SSL, then the guideline on how to configure SSL/TLS certificates and connect using a MySQL client.
or
You can follow the directions in Create and manage service account keys and perhaps remake the key file following those instructions.
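A quick way to test whether the key text is mangled, assuming jq and openssl are available and the key file is named key.json (a placeholder name):
# Extract the private_key field (jq -r turns the \n escapes into real newlines)
# and ask openssl to parse it; a parse failure confirms the key is corrupted.
jq -r .private_key key.json | openssl pkey -noout && echo "private_key parses OK"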

Kafka Connect setup to send records from Aurora using AWS MSK

I have to send records from Aurora/MySQL to MSK and from there to the Elasticsearch service:
Aurora --> Kafka Connect ---> AWS MSK ---> Kafka Connect ---> Elasticsearch
The record structure in the Aurora table is something like this.
I think the record will go to AWS MSK in this format:
"o36347-5d17-136a-9749-Oe46464",0,"NEW_CASE","WRLDCHK","o36347-5d17-136a-9749-Oe46464","<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><caseCreatedPayload><batchDetails/>","CASE",08-JUL-17 10.02.32.217000000 PM,"TIME","UTC","ON","0a348753-5d1e-17a2-9749-3345,MN4,","","0a348753-5d1e-17af-9749-FGFDGDFV","EOUHEORHOE","2454-5d17-138e-9749-setwr23424","","","",,"","",""
So in order for Elasticsearch to consume the records, I need to use a proper schema, so I have to use Schema Registry.
My questions
Question 1
How should I use Schema Registry for the above type of message? Is Schema Registry required?
Do I have to create a JSON structure for this, and if yes, where do I have to keep it?
I need more help to understand this.
I have edited
vim /usr/local/confluent/etc/schema-registry/schema-registry.properties
I mentioned ZooKeeper there, but I did not understand what kafkastore.topic=_schema is.
How do I link this to a custom schema?
When I started it anyway, I got this error:
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic _schemas not present in metadata after 60000 ms.
which I was expecting, because I did not do anything about the schema.
I do have the JDBC connector installed, and when I start it I get the error below:
Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
Question 2
Can I create two connectors on one EC2 instance (the JDBC one and the Elasticsearch one)? If yes, do I have to start both in separate CLIs?
Question 3
When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
I see only property values like below:
name=test-source-sqlite-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
mode=incrementing
incrementing.column.name=id
topic.prefix=trf-aurora-fspaudit-
Where in the above properties file can I mention the schema name and table name?
Based on the answer, I am updating my configuration for Kafka Connect JDBC.
---------------start JDBC connect elastic search -----------------------------
wget http://packages.confluent.io/archive/5.2/confluent-5.2.0-2.11.tar.gz -P ~/Downloads/
tar -zxvf ~/Downloads/confluent-5.2.0-2.11.tar.gz -C ~/Downloads/
sudo mv ~/Downloads/confluent-5.2.0 /usr/local/confluent
wget https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz
tar -xzf mysql-connector-java-5.1.48.tar.gz
sudo mv mysql-connector-java-5.1.48 /usr/local/confluent/share/java/kafka-connect-jdbc
And then
vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
Then I modified the properties below:
connection.url=jdbc:mysql://fdgfgdfgrter.us-east-1.rds.amazonaws.com:3306/trf
mode=incrementing
connection.user=admin
connection.password=Welcome123
table.whitelist=PANStatementInstanceLog
schema.pattern=dbo
Last, I modified
vim /usr/local/confluent/etc/kafka/connect-standalone.properties
and here I modified the properties below:
bootstrap.servers=b-3.205147-ertrtr.erer.c5.ertert.us-east-1.amazonaws.com:9092,b-6.ertert-riskaudit.ertet.c5.kafka.us-east-1.amazonaws.com:9092,b-1.ertert-riskaudit.ertert.c5.kafka.us-east-1.amazonaws.com:9092
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/local/confluent/share/java
When I list topics, I do not see any topic listed for the table name.
Stack trace for the error message:
[2020-01-03 07:40:57,169] ERROR Failed to create job for /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties (org.apache.kafka.connect.cli.ConnectStandalone:108)
[2020-01-03 07:40:57,169] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:119)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:116)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:423)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:188)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:113)
curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" IPaddressOfKCnode:8083/connectors/ -d '{"name": "emp-connector", "config": { "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "tasks.max": "1", "connection.url": "jdbc:mysql://IPaddressOfLocalMachine:3306/test_db?user=root&password=pwd","table.whitelist": "emp","mode": "timestamp","topic.prefix": "mysql-" } }'
Is Schema Registry required?
No. You can enable schemas in JSON records; the JDBC source can create them for you based on the table information:
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
I mentioned ZooKeeper there, but I did not understand what kafkastore.topic=_schema is.
If you want to use Schema Registry, you should be using kafkastore.bootstrap.servers with the Kafka address, not ZooKeeper. So remove kafkastore.connection.url.
Please read the docs for explanations of all properties
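For example, a minimal schema-registry.properties sketch (the broker address is a placeholder):
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://b-1.example.kafka.us-east-1.amazonaws.com:9092
kafkastore.topic=_schemas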
I did not do anything about the schema.
It doesn't matter. The schemas topic gets created when the Registry first starts.
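Once the Registry is up, you can verify the topic exists (the broker address is again a placeholder):
/usr/local/confluent/bin/kafka-topics --bootstrap-server b-1.example.kafka.us-east-1.amazonaws.com:9092 --list | grep _schemas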
Can I create two connectors on one EC2
Yes (given sufficient JVM heap space). Again, this is detailed in the Kafka Connect documentation.
In standalone mode, you first pass the Connect worker configuration, then up to N connector property files in one command, as sketched below.
In distributed mode, you use the Kafka Connect REST API.
https://docs.confluent.io/current/connect/managing/configuring.html
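For instance, a standalone launch running both connectors in one worker (the connector file names are hypothetical):
/usr/local/confluent/bin/connect-standalone \
  /usr/local/confluent/etc/kafka/connect-standalone.properties \
  jdbc-source.properties elasticsearch-sink.properties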
When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
First of all, that file is for SQLite, not MySQL/Postgres. You don't need to use the quickstart files; they are only there for reference.
Again, all properties are well documented
https://docs.confluent.io/current/connect/kafka-connect-jdbc/index.html#connect-jdbc
I do have the JDBC connector installed, and when I start it I get the error below
Here's more information about how you can debug that
https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/
As stated before, I would personally suggest using Debezium/CDC where possible
Debezium Connector for RDS Aurora
I'm guessing that you're planning to use Avro to transfer data, so don't forget to specify AvroConverter as the default converter when you start up your Kafka Connect workers. If you use JSON, then Schema Registry is not needed.
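For example, in the worker properties (the Schema Registry URL is a placeholder):
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081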
1.1 kafkastore.topic=_schema
Have you started up your own Schema Registry? When you start Schema Registry, you have to specify the schemas topic. Basically, this topic is used by Schema Registry to store the schemas registered with it, and in case of a failure it can recover them from there.
1.2 jdbc connector installed and when i start i get below error
By default, the JDBC Connector only works with SQLite and PostgreSQL. If you would like it to work with a MySQL database, then you should add the MySQL driver to the classpath as well.
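A sketch, assuming the Connector/J tarball from the question was extracted in the current directory (the exact jar name inside the tarball can vary by version):
# Copy the MySQL JDBC driver next to the kafka-connect-jdbc jars so the
# connector's classloader can find it; adjust the jar name to your version.
sudo cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48.jar /usr/local/confluent/share/java/kafka-connect-jdbc/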
2. It depends on how you are deploying your Kafka Connect workers. If you go for distributed mode (recommended), then you don't really need separate CLIs; you can deploy your connectors through the Kafka Connect REST API.
3. There is another property called table.whitelist in which you can specify your schemas and tables, e.g.: table.whitelist=users,products,transactions

SnappyData or SnappySession: SignalHandler: received explicit OS signal SIGPIPE

I get this error when sending data to the cluster:
2018-01-22 18:49:54 101 4859929 [SIGPIPE handler] WARN snappystore - SignalHandler: received explicit OS signal SIGPIPE
java.lang.Throwable: null
at com.pivotal.gemfirexd.internal.engine.SigThreadDumpHandler.handle(SigThreadDumpHandler.java:112)
at sun.misc.Signal$1.run(Signal.java:212)
at java.lang.Thread.run(Thread.java:745)
This means that there has been an unclean socket close on a receiver/sender, so the OS has sent a SIGPIPE. It is not a problem in itself, but it could indicate a problem with the connection to a remote node. This usually happens when network connectivity to a remote node drops in the middle of an operation, or when the remote node goes down abruptly. I would look at the logs before and after this point to see whether any exception was received from a remote node, and then go check that node.

How to resolve error 17836, Severity: 20, State: 14?

Length specified in network packet payload did not match number of bytes read; the connection has been closed. Please contact the vendor of the client library.
Error: 17836, Severity: 20, State: 14. I am getting this error about 5 times at almost the same moment and want to know the reason for its occurrence.
See the solution here:
Since the SQL Server has Event ID 17836 logged, the SQL port is open. It is more likely an authentication issue. Based on this article (Configuration for querying SQL database remotely: http://www.howtonetworking.com/others/testsqlconnect2.htm), we may have 3 fixes:
create a SQL login ID (recommended)
join the computer to the domain
allow anonymous connections to SQL Server 2000 or SQL Server 2005 (not recommended)
And from this MSDN forum:
Perform an nslookup of the CLIENT IP address listed in the error message and find out what computer is connecting. Then check that machine and determine what specifically is connecting to the SQL Server. You might get more information from running a SQL Trace for the Errors and Warnings event class with the ClientProcessID column in the trace data. When the error spikes, you might get the PID of the process that is connecting from that 10.26.32.96 machine, and then you can find that process in Task Manager on that machine by adding the PID to the data displayed (View -> Select Columns).
In my case, these events coincided with telnet connectivity tests I was running against our SQL Servers. We would see one entry in the Windows Event Log for each instance we successfully telnetted to.

Connect ASP.NET to MySQL on Linux: connection error

I can't get my hosting to connect to the MySQL database on Linux. I have this error message:
Unable to connect to any of the specified MySQL hosts
I need your help, thanks. This is my code:
I suspect this message came from an exception thrown by your con.Open() call. The message means your attempt to Open() the connection failed because Connector/Net (the .NET driver for MySQL) couldn't find the server you asked for. (Never mind the pluralization in "any of the specified MySQL hosts"; that is for a load-balancing / failover feature you're probably not using.)
If it took a few seconds for con.Open() to throw the exception, that means there was a timeout. That means the host at x.x.x.x did not respond at all, probably because it's not there or behind a firewall. If you're trying to connect from your home or office to a MySQL server at a hosting service, you may need to go into the hosting service's control panel and whitelist your own machine's IP address.
If Open() threw its exception quickly, it means the host is there, but it is not running a MySQL server.
Pro tip: always wrap your Open() calls in their own try{}catch(){} blocks; failed database connection attempts are not an unexpected occurrence. Here is an explanation.
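A sketch of that pattern, shown here in Java/JDBC terms since the idea is language-agnostic (the URL and credentials are placeholders; in your C# code the equivalent is catching MySqlException around con.Open()):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class OpenWithCare {
    public static void main(String[] args) {
        String url = "jdbc:mysql://db.example.com:3306/test_db"; // placeholder
        // Opening a connection is expected to fail sometimes (host down,
        // firewall, bad credentials), so handle it rather than crash.
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("connected: " + !con.isClosed());
        } catch (SQLException e) {
            System.err.println("connection failed: " + e.getMessage());
        }
    }
}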