Is there a way to print values from the gRPC Context in logs using logback?

I am writing a new microservice using gRPC. Traditionally, I would populate the trace ID in logback's MDC, and the logback configuration would take care of printing the trace ID in all the log statements. With gRPC I am using Context to carry the trace ID, but I couldn't figure out a way to log it directly through the logback config file.
I figured out that Context is the right place to do this from this link: How to intercept the headers from in call to one service and insert it to another request in gRPC-java?
Find below the pattern that I use to print values from logback's MDC in Java projects.
<Pattern>%date{dd-MM-yyyy;HH:mm:ss.SSS}|[%mdc{CLIENT-ID}]|[%mdc{REQ-ID}]|[%thread] %-5level %logger{36} - %msg%n
</Pattern>
Is there a way to print the values from the Context directly in log statements like above? Is this even the right way to think about logging the trace ID when dealing with gRPC?

Yes, you are on the right track by using the Context. The way to do it is to write a custom layout (subclass LayoutBase<ILoggingEvent>) that queries the Context and writes the value to the log. The code to query the Context is:
Span span = ContextUtils.getValue(Context.current());
then to convert it to a string:
span.getContext().getTraceId().toLowerBase16()
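A minimal sketch of such a layout, assuming the Span was placed into the gRPC Context via OpenCensus's ContextUtils as in the snippet above (the class name TraceIdLayout and the "no-trace" fallback are just illustrative):
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.CoreConstants;
import ch.qos.logback.core.LayoutBase;
import io.grpc.Context;
import io.opencensus.trace.Span;
import io.opencensus.trace.unsafe.ContextUtils;

public class TraceIdLayout extends LayoutBase<ILoggingEvent> {

    @Override
    public String doLayout(ILoggingEvent event) {
        // The gRPC Context is thread-local, so this reads the context of the thread
        // that emitted the log statement (be careful with async appenders).
        Span span = ContextUtils.getValue(Context.current());
        String traceId = (span != null)
                ? span.getContext().getTraceId().toLowerBase16()
                : "no-trace";

        StringBuilder sb = new StringBuilder();
        sb.append(event.getTimeStamp())
          .append(" [").append(traceId).append("] ")
          .append(event.getLevel()).append(" ")
          .append(event.getLoggerName()).append(" - ")
          .append(event.getFormattedMessage())
          .append(CoreConstants.LINE_SEPARATOR);
        return sb.toString();
    }
}
The layout is then referenced from logback.xml, typically wrapped in a ch.qos.logback.core.encoder.LayoutWrappingEncoder on the appender.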

Related

How can I externalize IScheduledExecutorService to run tasks in an external Hazelcast cluster (Hazelcast 5.2) without using UserCodeDeployment?

I am working on externalizing our IScheduledExecutorService so I can run tasks on an external cluster. I am able to write a test and get the Runnable to actually run ONLY if I turn on user code deployment. If I change this task at all and run the tests again, I get the following in my external cluster member's logs:
java.lang.IllegalStateException: Class com.mycompany.task.ScheduledTask is already in local cache and has conflicting byte code representation
I want to be able to change the task if needed, redeploy, and have Hazelcast just handle it. I do this kind of thing with our external maps now: they can handle different versions of our objects using compact serialization.
Am I stuck using user code deployment for these functional objects? If I need to make a change to one, I have to change the class name and redeploy to production. I'm hoping to get this task right the first time and never have to do that, but I have a way of handling it if I do.
The cluster is already running in production, and I'll have to add the following to each member:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and the appropriate client code (listed below) to enable this.
What I've done...
Added the following to my local Docker file:
HZ_USERCODEDEPLOYMENT_ENABLED=true
and also added the following in the code that creates a Hazelcast client connecting to my external cluster:
ClientConfig clientConfig = new ClientConfig();
ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig = new ClientUserCodeDeploymentConfig();
clientUserCodeDeploymentConfig.addClass("com.mycompany.task.ScheduledTask");
clientUserCodeDeploymentConfig.setEnabled(true);
clientConfig.setUserCodeDeploymentConfig(clientUserCodeDeploymentConfig);
However, if I remove those two pieces, I get the following exception and a failing test. It doesn't know about my class at all.
com.hazelcast.nio.serialization.HazelcastSerializationException: java.lang.ClassNotFoundException: com.mycompany.task.ScheduledTask
Side note:
We are already using compact serialization for several maps, and when I try to configure this Runnable task via compact serialization I get the error below. I don't think that's the right approach either.
[Scheduler: myScheduledExecutorService][Partition: 121][Task: 7afe68d5-3185-475f-b375-5a82a7088de3] Exception occurred during run
java.lang.ClassCastException: class com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord cannot be cast to class java.lang.Runnable (com.hazelcast.internal.serialization.impl.compact.DeserializedGenericRecord is in unnamed module of loader 'app'; java.lang.Runnable is in module java.base of loader 'bootstrap')
at com.hazelcast.scheduledexecutor.impl.ScheduledRunnableAdapter.call(ScheduledRunnableAdapter.java:49) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.scheduledexecutor.impl.TaskRunner.call(TaskRunner.java:78) ~[hazelcast-5.2.0.jar:5.2.0]
at com.hazelcast.internal.util.executor.CompletableFutureTask.run(CompletableFutureTask.java:64) ~[hazelcast-5.2.0.jar:5.2.0]

How to display server Log in Primefaces Log Component

I see it is possible in the docs but can't seem to find a way of implementing it.
 Log API is also available via global PrimeFaces object in case you’d
like to use the log component to display your logs.
Using PrimeFaces 6.2
Primefaces Log Component
Binding Log4J to <p:log id="log" />
Everything in #Kukeltje's answer is true.
Still, if your end-game is to see your server logs in the front-end of your JSF app, I would do the following:
Add a DB appender to your logging framework so that all logs are written to the database.
https://logging.apache.org/log4j/2.x/manual/appenders.html#JDBCAppender
https://logback.qos.ch/manual/appenders.html#DBAppender
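With logback, for example, a DBAppender configuration could look roughly like the following (a sketch only: it assumes the logging_event tables were created with logback's DB schema scripts, the driver class, URL and credentials are placeholders, and in newer logback versions the DBAppender ships in a separate module):
<configuration>
  <!-- Sketch: writes every log event to the logging_event* tables created by
       logback's DB schema scripts; driver, URL and credentials are placeholders. -->
  <appender name="DB" class="ch.qos.logback.classic.db.DBAppender">
    <connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
      <driverClass>com.mysql.cj.jdbc.Driver</driverClass>
      <url>jdbc:mysql://localhost:3306/applogs</url>
      <user>loguser</user>
      <password>logpass</password>
    </connectionSource>
  </appender>
  <root level="INFO">
    <appender-ref ref="DB" />
  </root>
</configuration>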
Create a JSF datatable to view the logs in. https://www.primefaces.org/showcase/ui/data/datatable/basic.xhtml
Initially, I would recommend that you filter the logs when retrieving them from the DB so that only the newest ones show in the table; otherwise the table might be too big to load within a reasonable time. As a permanent solution, I would recommend that you implement the LazyDataModel.
https://www.primefaces.org/showcase/ui/data/datatable/lazy.xhtml
It can be very handy to be able to filter and/or sort by log severity, time, and all the other fields supported by your logging framework.
Unrelated: Splunk has a Universal Forwarder utility that can send a copy of your logs to a Splunk server so that you can analyze them in near real time. https://www.splunk.com/en_us/download/universal-forwarder.html
Hope it is not too late.
I figured it out: from your Java bean, you can use the PrimeFaces object to execute JavaScript. That way you can, for example, call PrimeFaces.info('your log message'); to print to the log component.
Example function:
public void sendMessageToLog(int count) {
    String text = "PrimeFaces.info('Testing Logger #" + count + "');";
    PrimeFaces pf = PrimeFaces.current();
    pf.executeScript(text); // This executes the script in your page
}
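On the page itself, the log component just has to be present in the view; a hypothetical button invoking the bean method above could look like this (the bean name logBean is illustrative):
<p:log />
<p:commandButton value="Send test message" action="#{logBean.sendMessageToLog(1)}" />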

Spring Batch: Dump a set of queries over a database in parallel to flat files

So my scenario, drilled down to the essence, is as follows:
Essentially, I have a config file containing a set of SQL queries whose result sets need to be exported as CSV files.
Since some queries may return billions of rows, and because something may interrupt the process (bug, crash, ...), I want to use a framework such as Spring Batch, which gives me restartability and job monitoring.
I am using a file-based H2 database for persisting Spring Batch jobs.
So, here are my questions:
Upon creating a Job, I need to provide my RowMapper with some initial configuration. So what happens when a job needs to be restarted after, e.g., a crash? Concretely:
Is the state of the RowMapper automatically persisted, so that upon restart Spring Batch will try to restore the object from its database, or
will the RowMapper object that is part of the original Spring Batch XML config file be used, or
do I have to maintain the RowMapper's state using the step's/job's ExecutionContext?
The above question relates to whether there is magic going on when using the Spring Batch XML configuration, or whether I could just as well create all these beans programmatically:
Since I need to parse my own config format into a Spring Batch job config, I would rather just use Spring Batch's Java classes (beans) and fill them in appropriately than attempt to manually write out valid XML. However, if my Job crashes, I would create all the beans myself again. Does Spring Batch automagically restore the Job state from its database?
If I really need XML, is there a way to serialize a Spring Batch JobRepository (or one of these objects) as a Spring Batch XML config?
Right now, I have tried to configure my Step with the following code, but I am unsure if this is the proper way to do it:
Is TaskletStep the way to go?
Is the way I create the chunked reader/writer correct, or is there some other object which I should use instead?
I would have assumed that opening the reader and writer would occur automatically as part of the JobExecution, but if I don't open these resources prior to running the Job, I get an exception telling me that I need to open them first. Maybe I need to create some other object that manages the resources (JDBC connection and file handle)?
JdbcCursorItemReader<Foobar> itemReader = new JdbcCursorItemReader<Foobar>();
itemReader.setSql(sqlStr);
itemReader.setDataSource(dataSource);
itemReader.setRowMapper(rowMapper);
itemReader.afterPropertiesSet();
ExecutionContext executionContext = new ExecutionContext();
itemReader.open(executionContext);
FlatFileItemWriter<String> itemWriter = new FlatFileItemWriter<String>();
itemWriter.setLineAggregator(new PassThroughLineAggregator<String>());
itemWriter.setResource(outResource);
itemWriter.afterPropertiesSet();
itemWriter.open(executionContext);
int commitInterval = 50000;
CompletionPolicy completionPolicy = new SimpleCompletionPolicy(commitInterval);
RepeatTemplate repeatTemplate = new RepeatTemplate();
repeatTemplate.setCompletionPolicy(completionPolicy);
RepeatOperations repeatOperations = repeatTemplate;
ChunkProvider<Foobar> chunkProvider = new SimpleChunkProvider<Foobar>(itemReader, repeatOperations);
ItemProcessor<Foobar, String> itemProcessor = new ItemProcessor<Foobar, String>() {
    /* Custom implementation */ };
ChunkProcessor<Foobar> chunkProcessor = new SimpleChunkProcessor<Foobar, String>(itemProcessor, itemWriter);
Tasklet tasklet = new ChunkOrientedTasklet<Foobar>(chunkProvider, chunkProcessor); //new SplitFilesTasklet();
TaskletStep taskletStep = new TaskletStep();
taskletStep.setName(taskletName);
taskletStep.setJobRepository(jobRepository);
taskletStep.setTransactionManager(transactionManager);
taskletStep.setTasklet(tasklet);
taskletStep.afterPropertiesSet();
job.addStep(taskletStep);
Most of your questions are really complex, and it is difficult to give a good answer without writing a long paper.
I'm new to Spring Batch just like you, and I found a lot of really useful info - and all the answers to your questions - by reading Spring Batch in Action: it's complete, well explained, full of examples, and covers all aspects of the framework (readers/writers/processors, job/tasklet/chunk lifecycle/persistence, transaction/resource management, job flow, integration with other services, partitioning, restart/retry, failure management, and a lot of other interesting things).
Hope this helps.
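As a concrete starting point for the "Is TaskletStep the way to go?" part: a chunk-oriented step can usually be declared through Spring Batch's builder API instead of wiring ChunkOrientedTasklet by hand. The builder registers the reader and writer as ItemStreams, so the framework opens and closes them and persists their restart state in the job repository. A minimal sketch, assuming Spring Batch 2.2 through 4.x Java configuration, that the reader/processor/writer are themselves beans, and with illustrative class and step names (Foobar is the domain type from the question):
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: wires the same reader/processor/writer into a chunk-oriented step
// via the builder API. chunk() registers the reader and writer as ItemStreams, so
// the framework opens/closes them and saves their restart state (e.g. the cursor's
// row count) in the job repository; no manual open(executionContext) calls needed.
@Configuration
@EnableBatchProcessing
public class ExportStepConfig {

    @Bean
    public Step exportStep(StepBuilderFactory steps,
                           JdbcCursorItemReader<Foobar> itemReader,
                           ItemProcessor<Foobar, String> itemProcessor,
                           FlatFileItemWriter<String> itemWriter) {
        return steps.get("exportStep")
                .<Foobar, String>chunk(50000) // commit interval from the question
                .reader(itemReader)
                .processor(itemProcessor)
                .writer(itemWriter)
                .build();
    }
}
On restart, Spring Batch recreates the beans from your configuration (it does not serialize them); only the data stored in the step/job ExecutionContext, such as the reader's position, is restored from the job repository.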

How to log SQL queries to a log file with CakePHP

I have a CakePHP 1.2 application that makes a number of AJAX calls using the AjaxHelper object. The AjaxHelper makes a call to a controller function which then returns some data back to the page.
I would like to log the SQL queries that are executed by the AJAX controller functions. Normally, I would just set the debug level to 2 in config/core.php; however, this breaks my AJAX functionality because it causes the SQL queries to be appended to the output that is returned to the client side.
To get around this issue, I would like to be able to log any SQL queries performed to a log file. Any suggestions?
I found a nice way of adding this logging functionality at this link:
http://cakephp.1045679.n5.nabble.com/Log-SQL-queries-td1281970.html
Basically, in your cake/libs/model/datasources/dbo/ directory, you can make a subclass of the dbo that you're using. For example, if you're using the dbo_mysql.php database driver, then you can make a new class file called dbo_mysql_with_log.php. The file would contain some code along the lines of the following:
App::import('Core', array('Model', 'datasource', 'dbosource', 'dbomysql'));

class DboMysqlWithLog extends DboMysql {
    function _execute($sql) {
        $this->log($sql);
        return parent::_execute($sql);
    }
}
In a nutshell, this class modifies (i.e. overrides) the _execute function of the superclass to log the SQL query before doing whatever logic it normally does.
You can modify your app/config/database.php configuration file to use the new driver that you just created.
DebugKit is a fantastic way to debug things like this: https://github.com/cakephp/debug_kit

Log4net - log parts of code used in a couple of methods

I am having some trouble.
My application can be divided into 3 logical parts (import, processing, and export). Some parts of the code are used in several parts of my application. How can I determine which part of the application called my log4net object?
What is the best practice for logging info in code that is called from several places in the application?
I want to be able to turn logging for parts of my application on and off from a config file.
If I turn off logging for the processing part of my app, how can I log info in the export part when both of them use the same method, in which I initialize my logger object?
You could add a separate logger for each section of your app that you want to log and then turn them on and off as needed. This can all be set up via the config.
By setting the additivity property to false, the loggers will all be independent of one another. Here's an example of the config portion:
<logger name="Logger1" additivity="false">
<level value="INFO" />
<appender-ref ref="Logger1File" />
</logger>
To use it in your code, reference it like this:
private static ILog _Logger1 = LogManager.GetLogger("Logger1");
Anything you log to Logger1 will be separate from any other logger, including the root one.
log4net provides contexts for this purpose. I would suggest using a context stack like this:
using (log4net.ThreadContext.Stacks["Part"].Push("Import"))
    log.Info("Message during importing");

using (log4net.ThreadContext.Stacks["Part"].Push("Processing"))
    log.Info("Message during processing");

using (log4net.ThreadContext.Stacks["Part"].Push("Export"))
    log.Info("Message during exporting");
The value on the stack can be shown in the logs by including %property{Part} in a PatternLayout.
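For example, an appender configured along these lines would include the stack value on every log line (a sketch; the appender name, file path, and pattern are placeholders to adapt):
<appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
  <file value="logs/app.log" />
  <appendToFile value="true" />
  <layout type="log4net.Layout.PatternLayout">
    <!-- %property{Part} prints the value pushed onto ThreadContext.Stacks["Part"] -->
    <conversionPattern value="%date [%thread] %-5level %logger [%property{Part}] - %message%newline" />
  </layout>
</appender>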