I am currently attempting to change our log format in Quarkus from plain String to JSON, with some additional fields that are important for our monitoring and data analysis in Elastic/Kibana.
So far I have added this dependency, as specified in the official documentation:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-logging-json</artifactId>
</dependency>
https://quarkus.io/guides/logging
That changed the log format from a normal String to a full JSON format.
For example:
{"timestamp":"2022-09-05T13:30:09.314+01:00","sequence":24441,"loggerClassName":"org.jboss.logging.Logger","loggerName":"org.com.Controller","level":"INFO","message":"Test","threadName":"executor-thread-0","threadId":354,"mdc":{},"ndc":"","hostName":"hostname","processName":"test.jar","processId":9552}
My question is: how do I add additional fields to this log output? For instance, I need to add an additional JSON field called 'pattern' with a value extracted from the code each time. The final JSON output would look like this:
{"timestamp":"2022-09-05T13:30:09.314+01:00","sequence":24441,"loggerClassName":"org.jboss.logging.Logger","loggerName":"org.com.Controller","level":"INFO","message":"Test","threadName":"executor-thread-0","threadId":354,"mdc":{},"ndc":"","hostName":"hostname","processName":"test.jar","processId":9552, "pattern" :"test-pattern"}
I tried the following as specified in the documentation:
quarkus.log.file.json.additional-field.pattern.value=test-value
quarkus.log.file.json.additional-field.pattern.type=string
But this didn't show anything, and I'm not sure how to use it programmatically.
Example configuration
quarkus.log.console.json.additional-field."EXTRA".value=test-value
quarkus.log.console.json.additional-field."EXTRA".type=string
quarkus.log.file.json.additional-field."EXTRA".value=test-value
quarkus.log.file.json.additional-field."EXTRA".type=string
The field name should be wrapped in double quotes. Example output:
{"timestamp":"2022-09-18T14:37:37.687+01:00","sequence":1548,"loggerClassName":"org.jboss.logging.Logger","loggerName":"org.acme.GreetingResource","level":"INFO","message":"Hello","threadName":"executor-thread-0","threadId":101,"mdc":{},"ndc":"","hostName":"mintozzy-mach-wx9","processName":"code-with-quarkus-dev.jar","processId":133458,"EXTRA":"test-value"}
For a full working example, check:
You might be able to solve your problem with the Quarkiverse logging JSON extension, using its JsonProvider:
https://github.com/quarkiverse/quarkus-logging-json
The Quarkiverse logging JSON extension is much more flexible/extensible than the standard Quarkus logging JSON package, since you can add fields to the JSON log programmatically instead of in hard-coded configuration.
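A rough sketch of what such a provider could look like (the JsonProvider and JsonGenerator types and the writeTo signature are taken from the extension's README, so verify them against the linked repo; the hard-coded 'test-pattern' value is only for illustration and could instead be computed per log record):

import io.quarkiverse.loggingjson.{JsonGenerator, JsonProvider}
import javax.inject.Singleton
import org.jboss.logmanager.ExtLogRecord

// registered as a CDI bean so the extension discovers it
@Singleton
class PatternFieldProvider extends JsonProvider {
  // adds one extra top-level field to every JSON log line
  override def writeTo(generator: JsonGenerator, event: ExtLogRecord): Unit =
    generator.writeStringField("pattern", "test-pattern")
}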
Related
I want to convert incoming JSON data from Kafka into a dataframe.
I am using structured streaming with Scala 2.12
Most people add a hard-coded schema, but if the JSON can have additional fields, that requires changing the code base every time, which is tedious.
One approach is to write it into a file and infer the schema from that, but I'd rather avoid doing that.
Is there any other way to approach this problem?
Edit: I found a way to turn a JSON string into a dataframe, but I can't extract it from the stream source. Is it possible to extract it?
One way is to store the schema itself in the message headers (not in the key or value).
Though this increases the message size, it makes it easy to parse the JSON value without the need for any external resource like a file or a schema registry.
New messages can have new schemas while old messages can still be processed with their old schema, because the schema travels within the message itself.
Alternatively, you can version the schemas and include an id for every schema in the message headers, or a magic byte in the key or value, and infer the schema from there.
This is the approach followed by the Confluent Schema Registry. It basically lets you go through different versions of the same schema and see how your schema has evolved over time.
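On the reading side, a rough sketch of carrying the schema in the headers with Spark's Kafka source (this assumes Spark 3.x, where the source supports includeHeaders; the broker, topic, and header name are placeholders):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("schema-from-headers").getOrCreate()

val raw = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "events")                        // placeholder topic
  .option("includeHeaders", "true")                     // exposes a headers column
  .load()

// pull the payload and the schema header out side by side;
// the schema string can then drive the parsing downstream
val withSchema = raw.selectExpr(
  "CAST(value AS STRING) AS json",
  "CAST(filter(headers, h -> h.key = 'schema')[0].value AS STRING) AS schemaJson"
)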
Read the data as a string and then convert it to a Map[String, String]; this way you can process any JSON without even knowing its schema.
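A minimal sketch of that approach (the broker, topic, and field name are placeholders):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{MapType, StringType}

val spark = SparkSession.builder.appName("json-as-map").getOrCreate()

val raw = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "events")                        // placeholder topic
  .load()

// parse the JSON value into a generic map instead of a fixed case-class schema;
// this is simplest for flat JSON, nested objects may need a second parsing pass
val parsed = raw
  .selectExpr("CAST(value AS STRING) AS json")
  .select(from_json(col("json"), MapType(StringType, StringType)).as("data"))

// individual fields are then looked up by key at query time
val query = parsed
  .select(col("data").getItem("someField").as("someField"))
  .writeStream.format("console").start()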
Based on JavaTechnical's answer, the best approach would be to use a schema registry and Avro data instead of JSON; there is no getting around hard-coding a schema (for now).
Include your schema name and id as a header and use them to read the schema from the schema registry.
Use the from_avro function to turn that data into a DataFrame!
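A rough sketch of the from_avro step (this assumes Spark 3.x with the spark-avro module; the broker, topic, and schema are placeholders, and the schema string is assumed to have already been fetched from the registry using the id carried in the headers):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.avro.functions.from_avro
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder.appName("avro-from-kafka").getOrCreate()

// assumed to have been looked up in the schema registry by the id from the message headers
val schemaJson =
  """{"type":"record","name":"Event","fields":[{"name":"id","type":"string"}]}"""

val raw = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "events")                        // placeholder topic
  .load()

// the value must be plain Avro binary that matches schemaJson; Confluent-serialized
// records carry a magic-byte/schema-id prefix that would have to be stripped first
val events = raw
  .select(from_avro(col("value"), schemaJson).as("event"))
  .select(col("event.id"))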
I'm currently using GCE Container VMs (not GKE) to run Docker containers which write their JSON-formatted logs to the console. The log information is automatically collected and stored in Stackdriver.
Problem: Stackdriver displays the data field of the jsonPayload as text, not as JSON. It looks like the quotes of the fields within the payload are escaped and therefore not recognized as a JSON structure.
I used both logback-classic (as explained here) and slf4j/log4j (using a JSON pattern) to generate JSON output (which looks fine), but the output is not parsed correctly.
I assume I have to configure somewhere that the output is JSON-structured, not plain text. So far I haven't found an option for this when using a Container VM.
What does your logger write to stdout?
You shouldn't create a jsonPayload field yourself in your log output. That field gets automatically created when your logs get parsed and meet certain criteria.
Basically, write your log message into a message field of your JSON output and any additional data as additional fields. Stackdriver strips all special fields from your JSON payload; if there is nothing left, your message ends up as textPayload, otherwise you get a jsonPayload with your message and the other fields.
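For example, a stdout line like the one below (the field names other than message and severity are made up) should end up as a jsonPayload carrying message plus the extra fields, with severity mapped onto the log entry's severity:

{"severity":"INFO","message":"Order processed","orderId":"12345","component":"billing"}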
Full documentation is here:
https://cloud.google.com/logging/docs/structured-logging
My program stack is ReactiveMongo 0.11.0, Scala 2.11.6, Play 2.4.2.
I'm adding PATCH functionality support to my controllers. I want it to be type-safe, so that PATCH cannot mess up the data in Mongo.
My current dirty solution for doing this is (a rough code sketch follows the list):
Reading the object from Mongo first,
Performing JsObject.deepMerge with the provided patch,
Checking that the value can still be deserialized to the target type,
Serializing the merged object back to a JsObject, and checking that the patch contains only fields that are present in the merged JSON (so that no trash is added to the stored object),
Calling the actual $set on Mongo.
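A rough sketch of those steps with Play JSON (the Account case class and its fields are made up for illustration; the final $set update is left as a comment):

import play.api.libs.json._

case class Account(id: String, name: String, email: String)
object Account {
  implicit val format: Format[Account] = Json.format[Account]
}

def validatePatch(stored: Account, patch: JsObject): Either[String, JsObject] = {
  // steps 1-2: merge the patch into the stored object read from Mongo
  val merged = Json.toJson(stored).as[JsObject].deepMerge(patch)
  // step 3: check that the merged document still deserializes to the target type
  merged.validate[Account] match {
    case JsError(errors) => Left(s"patch would break the entity: $errors")
    case JsSuccess(entity, _) =>
      // step 4: the re-serialized entity contains only real fields,
      // so any extra key in the patch is trash and gets rejected
      val knownKeys = Json.toJson(entity).as[JsObject].keys
      val unknown = patch.keys -- knownKeys
      if (unknown.nonEmpty) Left(s"unknown fields: ${unknown.mkString(", ")}")
      else Right(patch) // step 5: safe to pass to the $set update
  }
}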
This is obviously not perfect, but it works fine. I would write macros to generate an appropriate format generalization, but that might take too much time, which I currently lack.
Is there a way to use the Play Framework JSON macro-generated format for partial entity validation like this?
Or any other solution that can be easily integrated into the Play Framework, for that matter.
With the help of @julien-richard-foy I made a small library to do exactly what I wanted:
https://github.com/clemble/scala-validator
I need to add some documentation, and then I'll publish it to a repository.
I am trying to use SpringXD to stream some JSON metrics data to an Oracle database.
I am using this example from here: SpringXD Example
Http call being made: EarthquakeJsonExample
My shell command:
stream create earthData --definition "trigger|usgs| jdbc --columns='mag,place,time,updated,tz,url,felt,cdi,mni,alert,tsunami,status,sig,net,code,ids,souces,types,nst,dmin,rms,gap,magnitude_type' --driverClassName=driver --username=username --password --url=url --tableName=Test_Table" --deploy
I would like to capture just the properties portion of this JSON response into the given table columns. I got it to the point where it doesn't give me an error on the hashing but instead just deposits a bunch of nulls into the columns.
I think my problem is the parsing of the JSON itself, since the properties are really inside the features array. Can SpringXD handle this for me out of the box, or will I need to write a custom processor?
Here is what the database looks like after a successful command.
Any advice? I'm new to parsing JSON in this fashion and I'm not really sure how to find more documentation or examples for SpringXD itself.
Here is a reference to the documentation: SpringXD Doc
The transformer in the JDBC sink expects a simple document that can be converted to a map of keys/values. You would need to add a transformer upstream, perhaps in your usgs processor or even as a separate processor. You could use a #jsonPath expression to extract the properties key and make it the payload.
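For example, something along these lines (the jsonPath expression is illustrative and the column list is shortened; if you need every element of the features array rather than just the first, put a splitter on '$.features' in front of the transform instead):

stream create earthData --definition "trigger | usgs | transform --expression=#jsonPath(payload,'$.features[0].properties') | jdbc --columns='mag,place,time,updated,tz,url' --driverClassName=driver --username=username --password --url=url --tableName=Test_Table" --deploy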
I would like to log messages in JSON format from within Java. I would like the convenience of professional logging like log4j, with hierarchical loggers and method names, but I would also like to output other key-value pairs in the JSON object.
I am looking for output similar to this:
{ 'time':'123' , level:'debug', action: 'open',filename:'bla.txt'}
{ 'time':'432' , level:'info', action: 'calculate',result:'353'}
If I use log4j and reformat, I cannot get the automatic values (the timestamp, for example) in the same object as the logged values.
Is there a logging framework or plugin to solve this?
I started using this project:
https://github.com/michaeltandy/log4j-json
It works great.