Is there a way to convert Nginx's error.log to JSON? I need to ship the logs to an external log viewer, and to do that I need the error.log converted to JSON.
Using Filebeat may be an alternative, depending on where you want to ship your logs (see https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html).
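For example, a minimal filebeat.yml sketch (assuming Filebeat 7.x; the log path and Logstash host are placeholders) that tails the Nginx error log and ships each line as a JSON event:
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/error.log   # Nginx error log to tail
output.logstash:
  hosts: ["logs.example.com:5044"]   # placeholder endpoint
Filebeat serializes every event as JSON on the wire, and it also ships an nginx module for parsing Nginx logs into structured fields.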
electron-builder generates "latest.yml", a blockmap, and an exe for Windows. But in our production environment YML is not accepted, so "latest.yml" needs to become "latest.json". What configuration is required to change "latest.yml" to "latest.json"?
electron-builder#^22.9.1
We tried it; there are no configuration options to change the output to JSON, so we converted from YML to JSON in the Jenkins build. electron-builder uses the js-yaml Node module to parse the update manifest, and js-yaml accepts both JSON and YML. If you serve JSON instead of YML, the present version of electron-updater will accept it and work fine.
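As an illustration, a rough sketch of that conversion step (file names and paths are placeholders; assumes js-yaml is available in the build environment, e.g. via npm install js-yaml):
// convert-latest.js: run in the Jenkins build after electron-builder
const fs = require('fs');
const yaml = require('js-yaml');

// js-yaml parses the YML update manifest into a plain object...
const doc = yaml.load(fs.readFileSync('dist/latest.yml', 'utf8'));
// ...which is re-serialized as JSON for electron-updater to consume
fs.writeFileSync('dist/latest.json', JSON.stringify(doc, null, 2));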
I have successfully deployed Nexus3 on an OpenShift cluster via the Nexus3 Helm chart.
In order to feed the Nexus container logs into the EFK stack, I want the Nexus container to output logs in JSON format.
I'm unable to find any documentation on changing the logging format for Nexus3.
How can I modify the configuration in the image so that the standard output is JSON and easily parsed?
I am trying to load data into Redshift using a Firehose delivery stream.
I am using a jsonpaths file uploaded to S3 at the following location.
s3://my_bucket/jsonpaths.json
This file contains the following jsonpaths config:
{
  "jsonpaths": [
    "$['col_1']",
    "$['col_2']",
    "$['col_3']",
    "$['col_4']"
  ]
}
To me this config looks ok, but the Firehose Redshift logs keep showing the following error.
"The provided jsonpaths file is not in a supported JSON format."
A similar error is seen even if I run the following COPY command directly on the Redshift cluster.
reshift_db=# COPY my_schema.my_table
FROM 's3://my_bucket/data.json'
FORMAT JSON 's3://my_bucket/jsonpaths.json'
CREDENTIALS 'aws_iam_role=<role_arn>'
;
ERROR: Manifest file is not in correct json format
DETAIL:
-----------------------------------------------
error: Manifest file is not in correct json format
code: 8001
context: Manifest file location = s3://my_bucket/jsonpaths.json
query: yyyyy
location: s3_utility.cpp:338
process: padbmaster [pid=xxxxx]
-----------------------------------------------
Can someone help with what is going wrong here?
The problem in my case was a BOM (Byte Order Mark) at the beginning of the jsonpaths file. Some editors save files with a BOM, which does not show up as visible characters in the editor. And apparently Redshift does not accept a BOM at the beginning of the jsonpaths file.
For those of you who want to check whether this is the case for your jsonpaths file, open the file in a hex editor; a UTF-8 BOM shows up as the bytes ef bb bf at the very start. For the S3 file this can be done as follows.
# aws s3 cp s3://my_bucket/jsonpaths.json - | hexdump -C
To remove the BOM from the file you can pipe it through dos2unix, which strips a UTF-8 BOM by default.
# aws s3 cp s3://my_bucket/jsonpaths.json - | dos2unix | aws s3 cp - s3://my_bucket/jsonpaths.json
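If dos2unix is not available, a sed one-liner can strip the BOM as well; a sketch assuming GNU sed and a local copy of the file:
# aws s3 cp s3://my_bucket/jsonpaths.json jsonpaths.json
# sed -i '1s/^\xEF\xBB\xBF//' jsonpaths.json
# aws s3 cp jsonpaths.json s3://my_bucket/jsonpaths.json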
After almost 2 days of trying, raising an AWS Support ticket, and posting this question, it struck me that I should check the file in a hex editor.
I have been trying to set up Keycloak logging to be scraped by Fluentd and used in Elasticsearch. So far I have used the provided CLI string in my Helm values file.
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=true, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
However, as you can see in the attached picture, the generated logs are entirely JSON apart from the core of the log: the message field. Currently the message field is provided as comma-separated key-value pairs. Is there any way to tell Keycloak, JBoss, or WildFly that it needs to provide the message in JSON too? That would let me search the data in Elasticsearch efficiently.
Check the keycloak_jsonlog_eventlistener project on GitHub (Keycloak JSON Log Eventlistener).
Primarily written for the JBoss Keycloak Docker image, it outputs Keycloak events as JSON into the Keycloak server log.
The idea is to parse the logs once they get to Logstash via journalbeat.
Tested with Keycloak version 8.0.1.
I am using Logback for logging. Scribe appenders send the logs in real time to a central Scribe aggregator, but I don't know how to add the source machine's IP to each log event. Looking at the aggregated central Scribe logs, it is almost impossible to tell which machine sent them. Appending the source machine's IP to each log event would therefore be helpful, and it would be really great if we could control that through the Logback configuration.
It's possible to pass the hostname down to the remote receiver through the logger context name.
Add the following to logback.xml on every sending host (contextName is a top-level configuration element, not a per-appender setting):
<contextName>${HOSTNAME}</contextName>
Then, on aggregator instance, it will be available for inclusion in the pattern:
<pattern>%contextName %d %-5level %logger{35} - %msg %n</pattern>
According to the Logback docs, there's now a CanonicalHostNamePropertyDefiner expressly for adding the hostname to your logs. Add a <define> element to your configuration:
<define name="hostname"
class="ch.qos.logback.core.property.CanonicalHostNamePropertyDefiner"/>
and access it as ${hostname}.
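For example (a sketch combining the define above with the pattern from the previous answer), in an appender's encoder:
<pattern>${hostname} %d %-5level %logger{35} - %msg%n</pattern>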
If you are working on a client-server project, you can use the MDC feature of SLF4J/Logback (see the MDC chapter of the Logback documentation); that way you can produce a well-structured log file in which you can identify which log entry belongs to which client.
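For example, a minimal sketch (the key name clientIp and the way you obtain the address are placeholders):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class ClientHandler {
    private static final Logger log = LoggerFactory.getLogger(ClientHandler.class);

    void handle(String clientIp) {
        // MDC values are thread-local and attached to every log event
        MDC.put("clientIp", clientIp);
        try {
            log.info("handling request");
        } finally {
            MDC.remove("clientIp"); // always clear to avoid leaking between requests
        }
    }
}
The value can then be included in the Logback pattern with %X{clientIp}.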
hope this helps!