Escape every log entry before sending it to Logstash - logback

I have a Spring Boot app that sends every log entry to an ELK stack using Logback in combination with SLF4J. The encoder is LogstashEncoder. How can I escape every log entry right before sending it to ELK via LogstashTcpSocketAppender? Or are \n and \r automatically escaped before sending?
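For reference, a minimal sketch of the logging call in question (the class name and message are illustrative). Assuming the standard behavior of LogstashEncoder, which serializes each event as JSON, control characters such as \r and \n are escaped by the JSON encoding itself:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EscapeDemo {
    private static final Logger log = LoggerFactory.getLogger(EscapeDemo.class);

    public static void main(String[] args) {
        // With LogstashEncoder configured on the LogstashTcpSocketAppender,
        // this event is serialized as JSON, so the CR/LF below are emitted
        // as the two-character escape sequences \r and \n inside the
        // "message" field rather than as raw line breaks.
        log.info("first line\r\nsecond line");
    }
}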

Related

How to read data from Kinesis stream using AWS CLI?

I have a Kinesis stream in AWS and can send JSON data to it using the aws kinesis command, and can get it back from the stream with:
SHARD_ITERATOR=$(aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name mystream --query 'ShardIterator' --profile myprofile)
aws kinesis get-records --shard-iterator $SHARD_ITERATOR --profile myprofile
The output looks something like:
HsKCQkidmlkZW9Tb3VyY2UiOiBbCgkJCXsKCQkJCSJicmFuZGluZyI6IHt9LAoJCQkJInByb21vUG9vbCI6IFtdLAoJCQkJImlkIjogbnVsbAoJCQl9CgkJXSwKCQkiaW1hZ2VTb3VyY2UiOiB7fSwKCQkibWV0YWRhdGFBcHByb3ZlZCI6IHRydWUsCgkJImR1ZURhdGUiOiAxNTgzMzEyNTA0ODAzLAoJCSJwcm9maWxlIjogewoJCQkiY29tcG9uZW50Q291bnQiOiAwLAoJCQkibmFtZSI6ICJTUUVfQVRfUFJPRklMRSIsCgkJCSJpZCI6ICJTUUVfQVRfUFJPRklMRV9JRCIsCgkJCSJwYWNrYWdlQ291bnQiOiAwLAoJCQkicGFja2FnZXMiOiBbCgkJCQl7CgkJCQkJIm5hbWUiOiAiUEVBQ09DSy1MVEEiLAoJCQkJCSJpZCI6ICJmZDk5NTRmZC03NDYwLTRjZjItOTU5Ni05YzBhMjcxNTViODgiCgkJCQl9CgkJCV0KCQl9LAoJCSJ3b3JrT3JkZXJJZCI6ICJTUUVfQVRfSk9CX1NVQk1JU1
How do I get the actual JSON message in raw format (looking like JSON), the same way it was when I originally sent it?
Thanks
As per the docs, you need to use a Base64 decoding tool or the KCL library to get the data back in the format it was sent:
The first thing you'll likely notice about your record in this part of the tutorial is that the data appears to be garbage; it's not the clear text testdata we sent. This is due to the way put-record uses Base64 encoding to allow you to send binary data. However, the Kinesis Data Streams support in the AWS CLI does not provide Base64 decoding because Base64 decoding to raw binary content printed to stdout can lead to undesired behavior and potential security issues on certain platforms and terminals. If you use a Base64 decoder (for example, https://www.base64decode.org/) to manually decode dGVzdGRhdGE= you will see that it is, in fact, testdata. This is sufficient for the sake of this tutorial because, in practice, the AWS CLI is rarely used to consume data, but more often to monitor the state of the stream and obtain information, as shown previously (describe-stream and list-streams). Future tutorials will show you how to build production-quality consumer applications using the Kinesis Client Library (KCL), where Base64 is taken care of for you. For more information about the KCL, see Developing KCL 1.x Consumers.
On Unix, you can use the base64 --decode command to decode the Base64-encoded Kinesis record data.
For example, to decode the data of the first record:
# define the name of the stream you want to read
KINESIS_STREAM_NAME='__your_stream_name_goes_here__';
# define the shard iterator to use
SHARD_ITERATOR=$(aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name $KINESIS_STREAM_NAME --query 'ShardIterator');
# read the records, use `jq` to grab the data of the first record, and base64 decode it
aws kinesis get-records --shard-iterator $SHARD_ITERATOR | jq -r '.Records[0].Data' | base64 --decode
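If you are consuming the records from Java rather than from the shell, java.util.Base64 performs the same decoding; a minimal sketch (the encoded string is the dGVzdGRhdGE= sample from the quoted docs):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeKinesisRecord {
    public static void main(String[] args) {
        // The CLI prints each record's Data field Base64-encoded; decoding
        // it recovers the payload exactly as it was put on the stream.
        String encoded = "dGVzdGRhdGE=";
        byte[] raw = Base64.getDecoder().decode(encoded);
        System.out.println(new String(raw, StandardCharsets.UTF_8)); // prints: testdata
    }
}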

Trying to sort out a Json ParserError: Unexpected end-of-input within/between Object entries

I've got an rsyslog server sending logs to Logstash (6.8). For the (very) vast majority of the logs I receive (Windows, Unix, etc.), everything parses just fine. The relevant input config:
input {
  # (the actual input plugin, e.g. tcp or udp, was elided from the snippet)
  codec => json {
    charset => "ISO-8859-1"
  }
}
However, when I receive some logs (I stress some) for event ID 4688 that contain binary data at the end of the log event, like the example below, I receive the error:
Json ParserError: Unexpected end-of-input within/between Object entries
-NoProfile -NonInteractive -EncodedCommand KABOAGUAdwAtAEkAdABlAG0AIAAtAFQAeQBwAGUAIABEAGkAcgBlAGMAdABvAHIAeQAgAC0AUABhAHQAaAAgACQAZQBuAHYAOgB0AGUAbQBwACAALQBOAGEAbQBlACAAIgBhAG4AcwBpAGIAbABlAC0AdABtAHAALQAxADQAMwAzADAAOAA3ADYANQA5AC4AMgA4AC0ANwA0ADQANAA1ADMANgA1ADQAMQA2ADgANwAyACIAKQAuAEYAdQBsAGwATgBhAG0AZQAgAHwAIABXAHIAaQB0AGUALQBIAG8AcwB0ACAALQBTAGUAcABhAHIAYQB0AG8AcgAgACcAJwA7AA==
I'm at a loss as to how to parse it. I don't really care about the binary data; is there a way for me to chop off the log event after it sees -EncodedCommand, before it gets run through a filter? All other 4688 events parse without any issue, and a grok filter (if the event makes it past the JSON parse) also runs without issue.

How to log JSON from a GCE Container VM to Stackdriver?

I'm currently using GCE Container VMs (not GKE) to run Docker containers which write their JSON-formatted logs to the console. The log information is automatically collected and stored in Stackdriver.
Problem: Stackdriver displays the data field of the jsonPayload as text, not as JSON. It looks like the quotes of the fields within the payload are escaped, so the payload is not recognized as a JSON structure.
I used both logback-classic (as explained here) and slf4j/log4j (using JSONPattern) to generate JSON output (which looks fine), but the output is not parsed correctly.
I assume I have to configure somewhere that the output is JSON-structured, not plain text. So far I haven't found such an option for a Container VM.
What does your logger write to stdout?
You shouldn't create a jsonPayload field yourself in your log output. That field is created automatically when your logs are parsed and meet certain criteria.
Basically, write your log message into a message field of your JSON output and any additional data as additional fields. Stackdriver strips all special fields from your JSON payload; if there is nothing left, your message ends up as textPayload, otherwise you get a jsonPayload with your message and the other fields.
Full documentation is here:
https://cloud.google.com/logging/docs/structured-logging
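For example, a container could emit one JSON object per line to stdout, as in this minimal sketch (severity and message are the documented special fields; orderId is an illustrative extra field):

public class StructuredLogDemo {
    public static void main(String[] args) {
        // "severity" maps to the log level and "message" becomes the display
        // text; any remaining field (here "orderId") ends up in jsonPayload.
        System.out.println(
            "{\"severity\": \"INFO\", \"message\": \"order created\", \"orderId\": \"42\"}");
    }
}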

" (Quotes) are getting converted into """ in Jboss logs

I am facing an issue in which I am returning a JSON string as an AJAX response. I have a logging framework in between which makes use of a "periodic-rotating-file-handler" for logging the messages.
One of the user messages being logged is a JSON string, which doesn't come out in the correct format: the " (quote) characters are getting converted into &quot; in the log file.

How to add an encoder for socket appender

I am using the Logback socket appender, and everything is OK; I can receive logs over the socket.
My scenario: we have a distributed app, and all logs are saved to a log server's log file with SocketAppender. I just use the SimpleSocketServer provided by Logback to receive the logs from all the apps, and the logs arrive and are saved.
The only problem is that no encoder can be added to the socket appender, so the log messages are written in some default format; but I must save them in a specific format.
One way I can see is to write a log server like SimpleSocketServer, where the server receives the serialized objects (ILoggingEvent) and I format them myself.
But that way I would need to write a lot of code. I think there should be a more convenient way to add an encoder.
I don't think you need to worry about the serialized version. You give the SocketAppender on the various clients your log messages as strings.
Then, as long as you configure the SimpleSocketServer to use your desired encoder in its configuration, all your messages should be in the correct format on disk.
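A minimal sketch of that setup (the port and config path are illustrative): SimpleSocketServer takes a listening port and a server-side Logback configuration file, and the encoder is declared on whatever appender that file defines.

import ch.qos.logback.classic.net.SimpleSocketServer;

public class LogServerMain {
    public static void main(String[] args) throws Exception {
        // Equivalent to running:
        //   java ch.qos.logback.classic.net.SimpleSocketServer 6000 server-logback.xml
        // Clients ship serialized ILoggingEvent objects; the formatting is
        // applied server-side by the appenders (e.g. a FileAppender with a
        // PatternLayoutEncoder) declared in server-logback.xml.
        SimpleSocketServer.main(new String[] {"6000", "server-logback.xml"});
    }
}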