Log in JSON format to Cloud Foundry Logstash

We are using the Swisscom Application Cloud (based on Cloud Foundry) and the provided Kibana/Logstash/Elasticsearch service. Now we'd like to log in JSON format from our applications to Logstash.
That's why we integrated the Logstash formatter into our WildFly Swarm apps, and since then they log in JSON format like:
{"#version":1,"#timestamp":"2018-07-24T18:28:51.291+0200","sequence":15299,"loggerClassName":"org.jboss.as.server.logging.ServerLogger_$logger","loggerName":"org.jboss.as.server.deployment","level":"INFO","threadName":"MSC service thread 1-2","message":"WFLYSRV0027: Starting deployment of \"hospush.war\" (runtime-name: \"hospush.war\")","threadId":31,"mdc":{},"ndc":""}
I also added a filter.conf to the logstash app on swisscom appcloud with the following content:
filter {
  json {
    source => "message"
  }
}
When I check the Logstash logs now, I can see that it throws an error and no logs are transferred to Kibana.
2018-07-24 20:33:15 [APP/PROC/WEB/0] OUT [2018-07-24T18:33:15,202][WARN ][logstash.filters.json ] Error parsing json {:source=>"message", :raw=>"<14>1 2018-07-24T18:33:15.018187+00:00 HosPush.demo-test.demo-test 3694b57f-bc05-459a-880a-17c174fc6d7c [APP/PROC/WEB/0] - - {\"@version\":1,\"@timestamp\":\"2018-07-24T20:33:15.017+0200\",\"sequence\":3796,\"loggerClassName\":\"org.slf4j.impl.Slf4jLogger\",\"loggerName\":\"com.hospush.business.escalation.EscalationService\",\"level\":\"INFO\",\"threadName\":\"EJB default - 3\",\"message\":\"Found 0 patientNeeds with no open notification.\",\"threadId\":154,\"mdc\":{},\"ndc\":\"\"}\n", :exception=>#<LogStash::Json::ParserError: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
2018-07-24 20:33:15 [APP/PROC/WEB/0] OUT at [Source: (byte[])"<14>1 2018-07-24T18:33:15.018187+00:00 HosPush.demo-test.demo-test 3694b57f-bc05-459a-880a-17c174fc6d7c [APP/PROC/WEB/0] - - {"@version":1,"@timestamp":"2018-07-24T20:33:15.017+0200","sequence":3796,"loggerClassName":"org.slf4j.impl.Slf4jLogger","loggerName":"com.hospush.business.escalation.EscalationService","level":"INFO","threadName":"EJB default - 3","message":"Found 0 patientNeeds with no open notification.","threadId":154,"mdc":{},"ndc":""}
2018-07-24 20:33:30 [APP/PROC/WEB/0] OUT [2018-07-24T18:33:30,117][WARN ][logstash.filters.json ] Error parsing json {:source=>"message", :raw=>"<14>1 2018-07-24T18:33:30.002777+00:00 HosPush.demo-test.demo-test 3694b57f-bc05-459a-880a-17c174fc6d7c [APP/PROC/WEB/0] - - {\"@version\":1,\"@timestamp\":\"2018-07-24T20:33:30.001+0200\",\"sequence\":3797,\"loggerClassName\":\"org.slf4j.impl.Slf4jLogger\",\"loggerName\":\"com.hospush.business.escalation.OrphanEscalationScheduler\",\"level\":\"INFO\",\"threadName\":\"EJB default - 4\",\"message\":\"orphan escalation scheduler called...\",\"threadId\":155,\"mdc\":{},\"ndc\":\"\"}\n", :exception=>#<LogStash::Json::ParserError: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
2018-07-24 20:33:30 [APP/PROC/WEB/0] OUT at [Source: (byte[])"<14>1 2018-07-24T18:33:30.002777+00:00 HosPush.demo-test.demo-test 3694b57f-bc05-459a-880a-17c174fc6d7c [APP/PROC/WEB/0] - - {"@version":1,"@timestamp":"2018-07-24T20:33:30.001+0200","sequence":3797,"loggerClassName":"org.slf4j.impl.Slf4jLogger","loggerName":"com.hospush.business.escalation.OrphanEscalationScheduler","level":"INFO","threadName":"EJB default - 4","message":"orphan escalation scheduler called...","threadId":155,"mdc":{},"ndc":""}
My guess is that, because of source => "message", Logstash parses the message property as JSON, which fails. What it should do is parse the whole "root object" as JSON instead of only the message property.
Could that be the case, and if so, how do I need to adjust filter.conf to make it work?
Thanks a lot for your help, guys.

Probably not relevant, but I guess the filter here is incorrect.
This says that there is a JSON structure in the message field that should be parsed as JSON. But as far as I can see, there is no such message field, so this will not work.
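If the whole syslog line shown in the error is what ends up in the message field, a hedged sketch of a fix would be to strip the header first and parse only the JSON payload. This assumes the prefix ("<14>1 ...") is a standard RFC 5424 syslog header and that the stock SYSLOG5424LINE grok pattern (shipped with logstash-patterns-core) matches the Cloud Foundry header:
filter {
  # Split the RFC 5424 header from the body; the pattern captures the
  # free-form part of the line as syslog5424_msg.
  grok {
    match => { "message" => "%{SYSLOG5424LINE}" }
  }
  # Parse only the JSON body, not the full syslog line.
  json {
    source => "syslog5424_msg"
  }
}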

Related

MySQL Sink Connector - JsonConverter - DataException: Unknown schema type: null

I have created a MySQL sink connector and run it successfully; I see the logs and got a 200 response. But the sink connector is not able to push the data into the MySQL DB (the data is available in the topic).
{
  "name": "mysql-sink-connector",
  "config": {
    "tasks.max": "2",
    "batch.size": "1000",
    "batch.max.rows": "1000",
    "poll.interval.ms": "500",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:mysql://mysql.azure.com:3306/db_test_dev",
    "table.name.format": "tbl_clients_merchants",
    "topics": "createorder",
    "connection.user": "user",
    "connection.password": "password",
    "auto.create": "true",
    "auto.evolve": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true"
  }
}
I am getting this error:
[2021-07-29 09:41:03,157] ERROR WorkerSinkTask{id=jdbc-mysql-sink-connector-1} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:344)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema type: null
at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 13 more
[2021-07-29 13:26:11,347] ERROR WorkerSinkTask{id=mysql-sink-connector-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
As the error suggests, you have null (tombstone) records in the topic, which cannot be mapped to any MySQL columns since there are no field names.
Option 1) Fix your producer to not send these messages
Option 2) Filter them in Connect
For example,
"transforms": "Filter",
"transforms.Filter.type": "org.apache.kafka.connect.transforms.Filter",
"transforms.Filter.predicate" : "DropNull",
"predicates" : "DropNull",
"predicates.DropNull.type": "org.apache.kafka.connect.transforms.predicates.RecordIsTombstone"

Mule JSON Validation Schema component: avoid error logs

I'm using the JSON Validation Schema component and I have noticed that it logs all the errors to the console.
I would like to avoid this error message being displayed in the console.
Even though I have chosen a special exception strategy (a catch exception strategy for the JSONValidation exception, with custom logic implemented and no loggers at all in it), I still see the following error message:
org.mule.api.MessagingException: Json content is not compliant with schema
com.github.fge.jsonschema.core.report.ListProcessingReport: failure
--- BEGIN MESSAGES ---
error: string "blah" is too long (length: 4, maximum allowed: 3)
level: "error"
schema: {"loadingURI":"file:/...}
instance: {"pointer":"/blah_blah_code"}
domain: "validation"
keyword: "maxLength"
value: "blah"
found: 4
maxLength: 3
--- END MESSAGES ---
How could I make Mule omit this error message? I don't want these errors to be logged to the console.
You can set the logException attribute of the catch-exception-strategy element to false, forcing Mule not to log errors to the console:
<catch-exception-strategy logException="false">
Alternatively, turn the following logger off in log4j2.xml:
<AsyncLogger name="org.mule.module.apikit.validation.RestJsonSchemaValidator" level="OFF"/>

How should I pass 2 values for table.blacklist on Kafka

I am trying to use the "PUT" method with JSON on an existing Kafka connector, trying to blacklist two tables (using Postman).
{
  "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
  "table.blacklist": ["abc", "def"]
}
I tried to pass it as a list, and also as a dictionary, and I am getting an error. I tried passing it as a string and still got the same error.
Error:
-08-01 02:07:04,627] WARN (org.eclipse.jetty.servlet.ServletHandler:620)
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize instance of java.lang.String out of START_ARRAY token
at [Source: org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream@68885dec; line: 2, column: 68] (through reference chain: java.util.LinkedHashMap["table.blacklist"])
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
This worked for me:
"table.blacklist": "abc,def"

How to configure rsyslog template for Exception error for remote logging?

I'm using rsyslog to ship logs to a remote Logstash server, and the Logstash on that server expects input data in a JSON format. How can I configure an rsyslog template to JSON-ify an exception? For example, I want to send the following exception as a single message.
2017-02-08 21:59:51,727 ERROR :localhost-startStop-1 [jdbc.sqlonly] 1. PreparedStatement.executeBatch() batching 1 statements:
1: insert into CR_CLUSTER_REGISTRY (Cluster_Name, Url, Update_Dttm, Node_Id) values ('customer', 'rmi://ip-10-53-123.123.eu-west-1.compute.internal:1199/2', '02/08/2017 21:59:51.639', '2')
java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 15.00.00.35] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:148)
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:137)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:272)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatch(TDPreparedStatement.java:2584)
at com.teradata.tal.qes.StatementProxy.executeBatch(StatementProxy.java:186)
at net.sf.log4jdbc.StatementSpy.executeBatch(StatementSpy.java:539)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
at com.teradata.tal.common.persistence.dao.SessionWrapper.flush(SessionWrapper.java:920)
at com.teradata.trm.common.persistence.dao.DaoImpl.save(DaoImpl.java:263)
at com.teradata.trm.common.service.AbstractService.save(AbstractService.java:509)
at com.teradata.trm.common.cluster.Cluster.init(Cluster.java:413)
at com.teradata.trm.common.cluster.NodeConfiguration.initialize(NodeConfiguration.java:182)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:73)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:30)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:97)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:324)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:929)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:467)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:385)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:284)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4973)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5467)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:632)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1247)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1898)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.00.00.35] [Error -2801] [SQLState 23000] Duplicate unique prime key error in CIM_META.CR_CLUSTER_REGISTRY.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:301)
at com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:114)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:311)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:200)
at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:137)
at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.run(PreparedBatchStatementController.java:58)
at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:387)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:252)
... 37 more
I have the following rsyslog configuration file. The startmsg.regex aims to "flag" the start of a new message when it sees the "YYYY-mm-dd" date format, and until it sees that format, it should treat any text following the date format as part of the current message.
input(type="imfile"
File="/usr/share/tomcat/dist/logs/trm-error.log*"
Facility="local3"
Tag="trm-error:"
Severity="error"
startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}"
escapeLF="on"
)
if $programname == 'trm-error:' then {
  action(
    type="omfwd"
    Target="10.53.234.234"
    Port="5514"
    Protocol="udp"
    template="textLogTemplate"
  )
  stop
}
...and the following template.
# Template for non-JSON logs; just sends the message wholesale with extra
# furniture.
template(name="textLogTemplate" type="list") {
  constant(value="{ ")
  constant(value="\"type\":\"")
  property(name="programname")
  constant(value="\", ")
  constant(value="\"host\":\"")
  property(name="hostname")
  constant(value="\", ")
  constant(value="\"timestamp\":\"")
  property(name="timestamp" dateFormat="rfc3339")
  constant(value="\", ")
  constant(value="\"@version\":\"1\", ")
  constant(value="\"customer\":\"customer\", ")
  constant(value="\"role\":\"app2\", ")
  constant(value="\"sourcefile\":\"")
  property(name="$!metadata!filename")
  constant(value="\", ")
  constant(value="\"message\":\"")
  property(name="rawmsg" format="json")
  constant(value="\"}\n")
}
However, Logstash complains about a _jsonparsefailure when it tries to parse the log as JSON. Any clues?
The rsyslog configuration files I'm using are correct; that is, the Java exception log is indeed wrapped into valid JSON. However, Logstash is complaining about a _jsonparsefailure, so this problem is probably related to the Logstash Ruby code, and not to the rsyslog side.
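For anyone debugging the Logstash side, a minimal sketch of the receiving input implied by the rsyslog action above (the port comes from the config; using the json codec rather than a separate json filter is an assumption):
input {
  udp {
    port => 5514
    # Each datagram is decoded as one JSON event; a _jsonparsefailure
    # tag here means the payload was not valid JSON after all.
    codec => json
  }
}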

Solr returns 400 (Bad Request) while indexing a JSON file from localhost

I am new to Solr. I have set up Solr on my local PC. I am facing a problem with indexing a JSON file: I have one JSON file on my local PC which I want to index in Solr, but it shows the error I have mentioned below.
Error
D:\Solr\Example\exampledocs>java -Durl=http://localhost:8983/solr/update -Dtype=application/json -jar post.jar timeline.json
SimplePostTool version 1.5
Posting files to base url http://localhost:8983/solr/update using content-type application/json..
POSTing file timeline.json
SimplePostTool: WARNING: Solr returned an error #400 (Bad Request) for url: http://localhost:8983/solr/update
SimplePostTool: WARNING: Response: {"responseHeader":{"status":400,"QTime":0},"error":{"msg":"Unknown command: Name [8]","code":400}}
SimplePostTool: WARNING: IOException while reading response: java.io.IOException: Server returned HTTP response code: 400 for URL: http://localhost:8983/solr/update
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/update..
Time spent: 0:00:00.167
Please help me: how can I solve this? Thanks in advance.
The log shows:
ERROR - 2014-07-30 18:38:52.330; org.apache.solr.common.SolrException; org.apache.solr.common.SolrException: Unexpected character '{' (code 123) in prolog; expected '<'
at [row,col {unknown-source}]: [1,1]
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:176)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1962)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Unknown Source)
Caused by: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '{' (code 123) in prolog; expected '<'
at [row,col {unknown-source}]: [1,1]
at com.ctc.wstx.sr.StreamScanner.throwUnexpectedChar(StreamScanner.java:648)
at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2047)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1069)
at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:213)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:174)
... 32 more
JSON file:
{ "Name" : "Matches and Schedule", "timestamp" : { "$date" : 1400825267792 }, "_id" : { "$oid" : "537ee50494" } }
{ "Name" : "meet Modi", "timestamp" : { "$date" : 1401449841192 }, "_id" : { "$oid" : "53886d3a2c" } }
Can't comment yet, so answering here.
Please post a snippet of your JSON file so we can comment.
Also, in these cases, the Solr log files are your best friend; they will tell you exactly what Solr doesn't like about the data you are posting.
Also, you don't have to specify the host if you are sending to localhost; it is the default.
EDIT: Your JSON doesn't look correct. What do you expect it to do with stuff like:
"timestamp" : { "$date" : 1400825267792 }
First of all, if you need timestamp in Solr to be a date/time field, it needs to be in UTC format. Second, Solr doesn't support nested elements.
Finally, if you are posting multiple documents via JSON, the format needs to look like this:
[ {"id":"doc1","field2":"val2"} , {"id":"doc2","field2":"val3"} ]
Note that all documents are enclosed in square brackets and separated by a comma.
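Applied to the question's data, a sketch of what a post-able version might look like: the nested $date/$oid wrappers are flattened, the epoch-millisecond timestamps are converted to UTC ISO 8601, and $oid is mapped to a hypothetical id unique-key field (the actual target schema is an assumption):
[
  {"id": "537ee50494", "Name": "Matches and Schedule", "timestamp": "2014-05-23T06:07:47Z"},
  {"id": "53886d3a2c", "Name": "meet Modi", "timestamp": "2014-05-30T11:37:21Z"}
]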