I'm new to Kafka and have a question about its configuration.
I want to use separate servers, like below:
server1: kafka producer
server2: kafka broker, kafka consumer, zookeeper
But I can't send messages to the broker, and I get the error messages below.
On the console producer (server1), stdout shows:
`
[2016-05-24 16:41:11,823] ERROR Error when sending message to topic twitter with key: null, value: 3 bytes with error: Failed to update metadata after 60000 ms.(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
`
On the Kafka broker (server2), stdout shows:
`
[2016-05-25 10:20:01,588] DEBUG Connection with /192.168.50.142 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:160)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:141)
at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:745)
`
The commands I ran are below.
On server2, in the Kafka directory:
`
./bin/zookeeper-server-start.sh config/zookeeper.properties
./bin/kafka-server-start.sh config/server.properties
./bin/kafka-console-consumer.sh --zookeeper 192.168.50.142:2181 --from-beginning --topic twitter
./bin/kafka-topics.sh --create --zookeeper 192.168.50.142:2181 --replication-factor 1 --partitions 1 --topic twitter
`
and on server1, in the Kafka directory:
`
./bin/kafka-console-producer.sh --broker-list 192.168.50.142:9092 --topic twitter
`
And my configuration is:
server1 (IP: 192.168.50.155):
kafka/config/producer.properties
`
metadata.broker.list=192.168.50.142:9092
producer.type=sync
compression.codec=none
serializer.class=kafka.serializer.DefaultEncoder
`
server2 (IP: 192.168.50.142):
kafka/config/zookeeper.properties
`
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
`
kafka/config/server.properties
`
broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
broker.id=0
port=9092
log.dir=/tmp/kafka-logs-1
delete.topic.enable=true
`
kafka/config/consumer.properties
`
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group
`
kafka_2.11-0.9.0.0
java 1.8.0_60
node v4.4.4
Do I need to change any configuration? Please give me some help.
It seems your producer configuration is not correct: the keys you are using belong to the old Scala producer. For the new producer, kafka/config/producer.properties should contain:
`
bootstrap.servers=192.168.50.142:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
`
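Note that the console producer does not pick up config/producer.properties on its own; if your version's console producer supports the --producer.config flag (I believe 0.9 does), you can pass the file explicitly:
`
./bin/kafka-console-producer.sh --broker-list 192.168.50.142:9092 --producer.config config/producer.properties --topic twitter
`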
Make the following changes in server.properties.
Change the listeners line to:
`
listeners=PLAINTEXT://:9092
`
Add this line:
`
advertised.listeners=PLAINTEXT://192.168.50.142:9092
`
advertised.listeners is the hostname and port the broker will advertise to producers and consumers; since your producer runs on another machine, it must be set to an address that machine can reach.
While writing any command in the terminal on the broker machine, use:
`
<command> --zookeeper localhost:2181 <rest of it>
`
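For example, to list topics from server2, where ZooKeeper listens on localhost:
`
./bin/kafka-topics.sh --zookeeper localhost:2181 --list
`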
Hope this works.
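If the producer still fails after these changes, it's worth confirming that server1 can actually reach the broker port at all; a quick check from server1, assuming nc is installed:
`
nc -vz 192.168.50.142 9092
`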
You need to modify the server.properties file with the appropriate config values, and update the /etc/hosts file so your machine's hostname resolves to its IP.
Related
I want to send a payload from a producer topic to a consumer topic. I've created the channels locally and tried sending a payload on the producer topic, but the payload is not received on the consumer side.
I think this could be an error in the JSON formatting. I've tried online JSON beautifiers, but that hasn't helped.
Although it's a very slight chance, there may be something wrong with the code so that the producer topic never receives the payload, but I'm not able to confirm this.
You'll need to show code to solve your specific problem, but here is a simple example using kcat and jq.
Producing
`
$ kcat -P -b localhost:9092 -t example
{"hello":"world"}
{"hello":"test data"}
`
Consume and parse
`
$ kcat -b localhost:9092 -C -t example -u | jq -r .hello
world
test data
`
The Kafka broker will not validate your JSON; the serialization library in your client might. So your issue could be any one of the following (a couple of quick checks are sketched after this list):
- Your serializer failed, and you aren't catching and logging that exception.
- You are not sending enough data to fill the producer's buffer, so you should call the producer's .flush() method at some point.
- You have some Kafka authorization enabled on your cluster and your producer is failing to connect or produce.
- Some other connection setting in your code is wrong.
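To rule out connectivity problems and confirm whether any payload actually reached the topic, a couple of quick checks with kcat (broker address and topic name assumed from the example above):
`
# list brokers and topics; proves the client can reach the cluster
$ kcat -b localhost:9092 -L
# read the topic from the beginning and exit at the end; shows whether any payload landed
$ kcat -b localhost:9092 -C -t example -o beginning -e
`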
I built the input file (decoded a base64 file into a .p12 file) as CERTIFICATE_PATH; P12_PASSWORD is a password stored in secrets; KEYCHAIN_PATH is defined. When I run the command on the CLI, I get a "1 item imported" success message, but when I run it from a *.yml file in GitHub Actions, I get the error "security: SecKeychainItemImport: One or more parameters passed to a function were not valid." Any suggestions?
`
security import $CERTIFICATE_PATH -P $P12_PASSWORD -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
`
CERTIFICATE_PATH is the file that contains the cert.p12 data; KEYCHAIN_PATH is TEMP/app-signing.keychain-db.
Another cause in GitHub Actions could be that you are using the wrong environment.
Take a look at this: Difference between Github's "Environment" and "Repository" secrets?.
Set the right environment:
`
environment: production
`
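For context, here is a minimal sketch of where that key goes in a workflow job (the job name, runner, and the assumption that CERTIFICATE_PATH and KEYCHAIN_PATH are exported by earlier steps are mine, not from the question):
`
jobs:
  sign:
    runs-on: macos-latest
    environment: production   # secrets referenced below must exist in this environment
    steps:
      - run: security import "$CERTIFICATE_PATH" -P "$P12_PASSWORD" -A -t cert -f pkcs12 -k "$KEYCHAIN_PATH"
        env:
          # CERTIFICATE_PATH and KEYCHAIN_PATH are assumed to be set by earlier steps
          P12_PASSWORD: ${{ secrets.P12_PASSWORD }}
`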
Found the issue: I was passing the wrong cert file. Once I added the correct file in the security build, I was able to get it working.
I have Keycloak deployed in Kubernetes using the official codecentric chart. Now I want Keycloak to log in JSON format so I can export the logs to Kibana.
A comment on the original answer pointed to a CLI script that does this:
`
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=false, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
`
Keycloak is a Java application running on WildFly. If you check the main process that is running inside the pod, you will see something like:
`
/usr/lib/jvm/java/bin/java -D[Standalone] -server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file=/opt/jboss/keycloak/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties -jar /opt/jboss/keycloak/jboss-modules.jar -mp /opt/jboss/keycloak/modules org.jboss.as.standalone -Djboss.home.dir=/opt/jboss/keycloak -Djboss.server.base.dir=/opt/jboss/keycloak/standalone -Djboss.bind.address=10.217.0.231 -Djboss.bind.address.private=10.217.0.231 -b 0.0.0.0 -c standalone.xml
`
The important part here is the following:
`
-Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties
`
So, the logging configuration is passed to the Java process as a JVM option, and read from the file on the path /opt/jboss/keycloak/standalone/configuration/logging.properties.
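To see what you are starting from, you can dump the stock file from a running pod (the pod name here is hypothetical):
`
kubectl exec keycloak-0 -- cat /opt/jboss/keycloak/standalone/configuration/logging.properties
`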
If you check the content of the file, it has a section like the following:
`
...
handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
handler.CONSOLE.level=INFO
handler.CONSOLE.formatter=COLOR-PATTERN
handler.CONSOLE.properties=autoFlush,target,enabled
handler.CONSOLE.autoFlush=true
handler.CONSOLE.target=SYSTEM_OUT
handler.CONSOLE.enabled=true
...
`
You need to figure out what to change in this logging configuration to meet your JSON requirements. An example would be:
`
formatter.json=org.jboss.logmanager.formatters.JsonFormatter
formatter.json.properties=keyOverrides,exceptionOutputType,metaData,prettyPrint,printDetails,recordDelimiter
formatter.json.constructorProperties=keyOverrides
formatter.json.keyOverrides=timestamp\=#timestamp
formatter.json.exceptionOutputType=FORMATTED
formatter.json.metaData=#version\=1
formatter.json.prettyPrint=false
formatter.json.printDetails=false
formatter.json.recordDelimiter=\n
`
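Defining the formatter alone is likely not enough: you also have to register its name and point the console handler at it. A sketch, assuming the naming used in the stock file shown above (verify against your actual logging.properties):
`
# register the new formatter name alongside the existing ones
formatters=PATTERN,COLOR-PATTERN,json
# switch the console handler from COLOR-PATTERN to the JSON formatter
handler.CONSOLE.formatter=json
`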
Then, in Kubernetes, you can create a ConfigMap with the logging config you want, define it as a volume in your pod/deployment, and mount it as a file at that exact path. If you do all the steps correctly, you should be able to customize the logging format as you need.
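A minimal sketch of that wiring; all names here are assumptions, and the subPath mount overlays just the one file:
`
apiVersion: v1
kind: ConfigMap
metadata:
  name: keycloak-logging
data:
  logging.properties: |
    # full customized logging.properties content goes here
---
# In the pod template of your deployment/statefulset (sketch):
# volumes:
#   - name: logging-config
#     configMap:
#       name: keycloak-logging
# volumeMounts:
#   - name: logging-config
#     mountPath: /opt/jboss/keycloak/standalone/configuration/logging.properties
#     subPath: logging.properties
`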
We are using BirtActuate in our application for showing reports.
Actuate -----> JDBC driver --------> MysqlDB
We are aiming to trace errors that appear while connecting to MySQL via JDBC.
We have followed instructions available at http://dev.mysql.com/doc/connector-j/en/connector-j-reference-configuration-properties.html
and tried making a connection using the following connection string:
`
jdbc:mysql://192.168.0.1/TestDB?interactiveClient=true&autoReconnect=true&profileSQL=true&traceProtocol=true
`
As per the documentation of the logger parameter in the link mentioned, we found:
The name of a class that implements "com.mysql.jdbc.log.Log" that will
be used to log messages to. (default is
"com.mysql.jdbc.log.StandardLogger", which logs to STDERR)
We want to trap all errors in a file so we can send it to the support people to help us solve issues. I do not really know how to do that.
Adding &profileSQL=true&traceProtocol=true to the JDBC connection URL will cause extra traces to be logged by BirtActuate's default logger into its logs directory, which in the present birtActuateServer is $BIRT_HOME/server/data/logs.
Go to that logs directory and run, at a command prompt:
`
$ grep -rl com.mysql.jdbc.exceptions .
`
This command should list the files in which it found the "com.mysql.jdbc.exceptions" string.
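To bundle the matching lines, with a few lines of context, into a single file you can send to support (the output path is just an example):
`
$ grep -rn -A3 com.mysql.jdbc.exceptions . > /tmp/mysql-jdbc-errors.txt
`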
I have a problem configuring Zapcat on Zabbix 2.0; I followed this link:
http://www.kjkoster.org/zapcat/Tomcat_How_To.html
Then I tried to use JMX, but an error occurred when running:
`
java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=12345 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.password.file=/usr/lib/jvm/java-6-openjdk-i386/jre/lib/management/jmxremote.password -Dcom.sun.management.jmxremote.access.file=/usr/lib/jvm/java-6-openjdk-i386/jre/lib/management/jmxremote.access -Djava.rmi.server.hostname=192.168.2.56 -Dcom.sun.management.jmxremote.ssl=true
`
and the error is:
`
Error: Password file read access must be restricted: /usr/lib/jvm/java-6-openjdk-i386/jre/lib/management/jmxremote.password
`
Is there any solution for me?
You don't need Zapcat with Zabbix 2.0; look into using the Zabbix Java Gateway instead: https://www.zabbix.com/documentation/2.0/manual/concepts/java
As for the error itself: you need to restrict the permissions on your password file with chmod 600, as sketched below.
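The file must be readable only by the user that starts the JVM; a sketch (the tomcat user below is an example, not from the question):
`
# make the password file readable/writable only by its owner
chmod 600 /usr/lib/jvm/java-6-openjdk-i386/jre/lib/management/jmxremote.password
# the owner must be the user that runs the JVM, e.g.:
chown tomcat /usr/lib/jvm/java-6-openjdk-i386/jre/lib/management/jmxremote.password
`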