OpenResty custom JSON access log

I would like to write JSON strings into the access logs on every request, so they are easier to consume later.
I am using the print() function exposed by Lapis/OpenResty; however, I would like to override the timestamp, log level, and other information that is part of the nginx log format.
How can I override it?

To fill the access log with JSON, you can use something like this in your nginx.conf:
log_format mydef "$json_log";
access_log logs/access.log mydef;

server {
    ...
    set $json_log '';

    log_by_lua_block {
        local json = require "cjson"
        ngx.var.json_log = json.encode({my_json_data = 1})
    }
}
If you want to remove the default prefix in the nginx error log, that is not currently possible, since the format is hardcoded in nginx's source.
However, you can emit your data in a custom format for your consumer from the log_by_lua context.
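Building on the config above, the encoded table can carry your own timestamp and level instead of nginx's defaults. A sketch (the field names are illustrative; `ngx.now()`, `ngx.status`, and the `$request_time` variable are standard ngx_lua API):

```
log_by_lua_block {
    local json = require "cjson"
    ngx.var.json_log = json.encode({
        time         = ngx.now(),                 -- request-end timestamp, epoch seconds
        level        = "info",                    -- your own level, not nginx's
        status       = ngx.status,
        uri          = ngx.var.uri,
        request_time = ngx.var.request_time,
    })
}
```

Since log_by_lua_block runs after the response is sent, it adds no latency to the request itself.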

Related

Rsyslog - How can I parse a JSON message and use it as a variable in an if condition

My application sends JSON-formatted (nested) log messages to rsyslog via UDP, like this:
{"http": {"status_code": 400}}
I want to parse this log and use a property as a variable in an if condition in the rsyslog conf, like this:
if ($!http!status_code >= 400) then {
/var/log/haproxy/haproxy-traffic-error.log
stop
}
Is this possible? This issue is driving me crazy.
I have read many articles about mmjsonparse, mmnormalize, etc.
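For reference, a minimal rsyslog.conf sketch of the mmjsonparse approach the question mentions. This is an assumption about the intended setup, not a verified answer: `cookie=""` makes mmjsonparse accept bare JSON without the `@cee:` prefix, `$parsesuccess` is set by the module, and the file path is illustrative. Depending on your rsyslog version you may need `cnum()` to force a numeric comparison.

```
module(load="mmjsonparse")

# attempt to parse every incoming message as JSON
action(type="mmjsonparse" cookie="")

if $parsesuccess == "OK" and $!http!status_code >= 400 then {
    action(type="omfile" file="/var/log/haproxy/haproxy-traffic-error.log")
    stop
}
```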

How to log JSON from a GCE Container VM to Stackdriver?

I'm currently using GCE Container VMs (not GKE) to run Docker containers that write their JSON-formatted logs to the console. The log information is automatically collected and stored in Stackdriver.
Problem: Stackdriver displays the data-field of the jsonPayload as text - not as JSON. It looks like the quotes of the fields within the payload are escaped and therefore not recognized as JSON structure.
I used both logback-classic (as explained here) and slf4j/log4j (using a JSON pattern) to generate JSON output (which looks fine), but the output is not parsed correctly.
I assume I have to configure somewhere that the output is JSON-structured, not plain text. So far I haven't found such an option for a Container VM.
What does your logger output into stdout?
You shouldn't create a jsonPayload field yourself in your log output. That field gets automatically created when your logs get parsed and meet certain criteria.
Basically, write your log message into a message field of your JSON output and any additional data as additional fields. Stackdriver strips all special fields from your JSON payload; if nothing is left, your message ends up as textPayload, otherwise you get a jsonPayload with your message and the other fields.
Full documentation is here:
https://cloud.google.com/logging/docs/structured-logging
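As a sketch of what the answer describes, each log line can be a single JSON object with a `message` field (per the structured-logging docs); everything else here, including the helper name and the extra fields, is illustrative:

```python
import json
import sys

def log(message, severity="INFO", **fields):
    """Emit one JSON object per line to stdout; the logging agent
    parses each line and maps known fields like message and severity."""
    record = {"message": message, "severity": severity}
    record.update(fields)
    sys.stdout.write(json.dumps(record) + "\n")

log("request failed", severity="ERROR", status_code=400, path="/api/v1")
```

The key point is one complete JSON object per line on stdout, with no wrapping or pretty-printing, so the agent can parse each line independently.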

Nginx: return an empty JSON object with a fake 200 status code

We've got an API running on nginx that is supposed to return JSON objects. This server is under heavy load, so we have made a lot of performance improvements.
The API receives an ID from the client. The server has a bunch of files representing these IDs, so if the ID is found as a file, the contents of that file (which is JSON) are returned by the backend. If the file does not exist, no backend is called; nginx simply sends a 404, which saves performance (no backend system has to run).
Now we have stumbled upon a problem. Due to old systems we still have to support, we cannot return a 404 page to clients, as this would cause problems. What I came up with is to return an empty JSON object ({}) with a 'fake' 200 status code instead. This needs to be a highly performant solution to still handle all the load.
Is this possible to do, and if so, how?
error_page 404 =200 @empty_json;

location @empty_json {
    return 200 "{}";
}
Reference:
http://nginx.org/r/error_page
http://nginx.org/r/return
http://nginx.org/r/location
You can always create a file in your document root called e.g. empty.json, which contains only an empty object: {}
Then add this line to the location block in your nginx configuration:
try_files $uri /empty.json;
( read more about try_files )
This checks whether the file requested by the client exists; if it does not, empty.json is served instead. This produces a 200 HTTP OK and returns {} to the requesting client.
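Putting that answer in context, a minimal server block might look like this (the listen port, root path, and file name are illustrative):

```
server {
    listen 80;
    root /var/www/api;

    location / {
        # serve the JSON file for the requested ID if it exists,
        # otherwise fall back to the static empty object
        try_files $uri /empty.json;
    }
}
```

Both lookups are plain static-file serving, so no backend is ever invoked for missing IDs.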

Logging Multiple JSON Objects to a Single File - File Format

I have a solution where I need to be able to log multiple JSON objects to a file, essentially one log file per day. What is the easiest way to write (and later read) these from a single file?
How does MongoDB handle this with BSON? What does it use as a separator between "records"?
Do Protocol Buffers, BSON, MessagePack, etc. offer compression and a record concept? Compression would be a nice benefit.
With protocol buffers you could define the message as follows:
message JSONObject {
    required string JSON = 1;
}

message DailyJSONLog {
    repeated JSONObject JSON = 1;
}
This way you would just read the file into memory and deserialize it; serializing works essentially the same way. Once you have the file (a serialized DailyJSONLog) on disk, you can simply append serialized JSONObjects to the end of it (since the DailyJSONLog message is just a repeated field).
The only issue with this is if you have a LOT of messages each day, or if you want to start at a certain location within the day (you cannot easily seek to the middle, or an arbitrary point, of the repeated list).
I've gotten around this by taking a JSONObject, serializing it, and then base64-encoding it, storing the results in a file separated by newlines. This lets you easily see how many records are in each file, access any arbitrary JSON object within the file, and trivially keep appending to it (you can also append to the 'repeated' message above fairly trivially, but that is a one-way operation).
Compression is a different topic. Protocol Buffers will not compress strings. If you define a pb message to match your JSON structure, you get the benefit of pb possibly 'compressing' any integers into their varint encoding. You will get 'less' compression if you take the base64-encoding route above as well.
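The newline-separated, base64-encoded scheme from the last answer can be sketched without protocol buffers at all, using plain JSON as the per-record serialization (the function names and file path are illustrative):

```python
import base64
import json

def append_record(path, obj):
    # serialize one record, encode it, and append it as a single line
    line = base64.b64encode(json.dumps(obj).encode("utf-8"))
    with open(path, "ab") as f:
        f.write(line + b"\n")

def read_records(path):
    # each line is one independent record, so counting records,
    # skipping ahead, and random access are all trivial
    with open(path, "rb") as f:
        return [json.loads(base64.b64decode(line.strip())) for line in f]
```

Base64 guarantees the encoded record contains no newlines, so the newline is a safe record separator even when the JSON itself contains embedded line breaks.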

How to add an encoder for socket appender

I am using the Logback socket appender, and everything is OK; I can receive logs from the socket.
My scenario: we have a distributed app, and all logs are saved to a log server's log file via SocketAppender. I just use the SimpleSocketServer provided by Logback to receive logs from all the apps, and the logs are received and saved.
The only problem is that no encoder can be added to the socket appender, so the log messages are formatted in some default format. But I must save them in a specific format.
One way I found is to write a log server like SimpleSocketServer that receives the serialized object (ILoggingEvent) and formats it myself.
But that way I would have to write too much code. I think there should be a more convenient way to add an encoder.
I don't think you need to worry about the serialized version. You give the SocketAppender on the various clients String messages.
Then, as long as you configure SimpleSocketServer to use your desired encoder in its configuration, all your messages should be in the correct format on disk.
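A sketch of the server-side configuration this answer implies: a logback.xml passed to SimpleSocketServer, where a FileAppender's encoder controls the on-disk format (the file path and pattern are illustrative):

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>/var/log/apps/aggregate.log</file>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
      <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="DEBUG">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
</configuration stays as shown above -->
```

SimpleSocketServer takes the port and the config file as arguments, e.g. `java ch.qos.logback.classic.net.SimpleSocketServer 6000 /path/to/logback-server.xml`, so the encoder lives entirely on the server side.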