We recently migrated from Splunk to ELK, and we wanted to log our messages as JSON for better searchability in Kibana.
Our application uses Vert.x 3.9. I came across https://reactiverse.io/reactiverse-contextual-logging, but that requires vertx-core to be upgraded to 4.x, which would be a major change for us, so I looked for other options. (I also joined this team recently and am new to Vert.x.)
I came across net.logstash.logback:logstash-logback-encoder and was able to log messages as JSON. I used io.vertx.core.json.JsonObject to convert the message and key/values to JSON, via a wrapper class that returns the JSON string for a given message and key/values, as shown below.
import java.util.Map;

import io.vertx.core.json.JsonObject;

public class KeyValueLogger {

    public static String getLog(final String message) {
        return new JsonObject().put("message", message).encode();
    }

    public static String getLog(final String message, final Map<String, Object> params) {
        return new JsonObject().put("message", message).mergeIn(new JsonObject(params)).encode();
    }
}
Every log message calls KeyValueLogger.getLog above to get the JSON message. Since Vert.x is a reactive toolkit, is there a better solution for converting log messages to JSON? Most of the log messages happen on worker threads, but I am afraid that if any happen on an event loop thread, it might impact performance.
Thanks in advance!
I was able to use logstash-logback-encoder by following the "Examples using Markers" section of https://github.com/logfellow/logstash-logback-encoder/tree/logstash-logback-encoder-6.6#readme. This eliminated the custom code for converting the message and key/values to JSON. I tested the performance and it is really good; there is no need for a reactive/async solution for my use case.
Sample Code
import static net.logstash.logback.marker.Markers.appendEntries;

import java.util.Map;

import org.junit.Test;

import com.google.common.collect.ImmutableMap;

// 'log' is assumed to be an SLF4J logger (e.g. from Lombok's @Slf4j)
@Test
public void logTest() {
    Map<String, String> map = ImmutableMap.of("p1", "v1", "p2", "v2");
    log.info(appendEntries(map), "startTest 100");
    log.info("startTest {} , key1={}, key2={}", 101, "v1", "v2");
    log.info(appendEntries(map), "startTest {} , key1={}, key2={}", 101, "v1", "v2");
    log.info("startTest 102");
    log.error(appendEntries(map), "error occurred", new RuntimeException());
}
Sample logback configuration
Note: <logstashMarkers/> is required to log the Map as key/values in the JSON log output.
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <timestamp/>
      <logLevel/>
      <contextName/>
      <threadName>
        <fieldName>thread</fieldName>
      </threadName>
      <callerData>
        <fieldName>src</fieldName>
        <classFieldName>class</classFieldName>
        <methodFieldName>method</methodFieldName>
        <fileFieldName>file</fileFieldName>
        <lineFieldName>line</lineFieldName>
      </callerData>
      <logstashMarkers/>
      <pattern>
        <pattern>
          {
            "message": "%message"
          }
        </pattern>
      </pattern>
      <stackTrace>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
          <maxDepthPerThrowable>30</maxDepthPerThrowable>
          <maxLength>2048</maxLength>
          <shortenedClassNameLength>20</shortenedClassNameLength>
          <exclude>^sun\.reflect\..*\.invoke</exclude>
          <exclude>^net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
          <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
      </stackTrace>
    </providers>
  </encoder>
</appender>
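For reference, with this configuration a call such as log.info(appendEntries(map), "startTest 100") from the sample above produces one JSON document per log event, roughly like the following (pretty-printed here for readability; exact field names, order, and values depend on the encoder version and runtime):

{
  "@timestamp": "2023-01-01T12:00:00.000Z",
  "level": "INFO",
  "context": "default",
  "thread": "main",
  "src": {
    "class": "com.example.LogTest",
    "method": "logTest",
    "file": "LogTest.java",
    "line": 21
  },
  "p1": "v1",
  "p2": "v2",
  "message": "startTest 100"
}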
Performance test
I added the following performance test; it took ~0.08286 milliseconds (82,864.83 nanoseconds) per log message on average.
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

private static Map<String, Object> getMap() {
    int count = 5;
    Map<String, Object> map = new HashMap<String, Object>();
    for (int i = 0; i < count; ++i) {
        map.put("key" + i, UUID.randomUUID());
    }
    return map;
}

@Test
public void logPerformanceTest() {
    long startTime = System.nanoTime();
    final int COUNT = 10000;
    for (int i = 0; i < COUNT; i++) {
        log.info(appendEntries(getMap()), "startTest 100");
    }
    long time = System.nanoTime() - startTime;
    System.out.println("###### TOOK " + time + " (ns) ");
}
There might be a better solution for converting log messages to JSON in Vert.x. One option could be using the Vert.x event bus to send log messages from worker threads to a dedicated logger verticle. That way, the logger verticle handles the JSON conversion and logging, freeing the worker threads from performing additional tasks.
This approach also ensures that the logging process does not impact the performance of the event loop thread, as it runs in a separate verticle.
First, you need to create a logger verticle that will handle the JSON conversion and logging. This verticle can subscribe to a specific address on the event bus to receive log messages. The logger verticle can look like this:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.eventbus.EventBus;
import io.vertx.core.json.JsonObject;

public class LoggerVerticle extends AbstractVerticle {

    @Override
    public void start() {
        EventBus eventBus = vertx.eventBus();
        eventBus.consumer("logger.address", message -> {
            JsonObject logMessage = (JsonObject) message.body();
            // Convert the log message to JSON format and log it
            // (System.out stands in for a real logger here)
            System.out.println(logMessage.encodePrettily());
        });
    }
}
Next, in your worker threads, you can send log messages to the logger verticle using the event bus: create a JsonObject with the log message and other key/value pairs, and send it to logger.address.
Map<String, Object> params = new HashMap<>();
params.put("timestamp", System.currentTimeMillis());
params.put("worker-thread", Thread.currentThread().getName());
JsonObject logMessage = new JsonObject(params);
logMessage.put("message", "Log message from worker thread");
EventBus eventBus = vertx.eventBus();
eventBus.send("logger.address", logMessage);
Finally, you need to deploy the logger verticle in your Vert.x application, so that it starts receiving log messages.
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new LoggerVerticle());
This way, you can ensure that log messages are converted to JSON format before logging and that the logging process does not impact the performance of the event loop thread.
It's like having a dedicated chef to cook the dishes and serve them, freeing up the waiter to focus on taking orders and managing the restaurant.
Related
We are using the forge-api-java-client. There is an issue in the Model Derivatives getManifest call.
The response fails mapping because a single message String is returned instead of the expected String array.
We have switched to using a local build of the jar, with a change in Message.java to add an alternative setMessage overload:
public void setMessage(String message) {
    List<String> messages = new ArrayList<>();
    messages.add(message);
    setMessage(messages);
}
Could this change be merged into the project?
We'll check it, but as of today, that package is just under maintenance. You are welcome to submit a PR.
I'm getting an error when unmarshalling files that only contain a single JSON object: "IllegalStateException: The Json input stream must start with an array of Json objects"
I can't find any workaround and I don't understand why it has to be so.
@Bean
public ItemReader<JsonHar> reader(@Value("file:${json.resources.path}/*.json") Resource[] resources) {
    log.info("Processing JSON resources: {}", Arrays.toString(resources));
    JsonItemReader<JsonHar> delegate = new JsonItemReaderBuilder<JsonHar>()
            .jsonObjectReader(new JacksonJsonObjectReader<>(JsonHar.class))
            .resource(resources[0]) // FIXME had to force this, but fails anyway because the file is "{...}" and not "[...]"
            .name("jsonItemReader")
            .build();
    MultiResourceItemReader<JsonHar> reader = new MultiResourceItemReader<>();
    reader.setDelegate(delegate);
    reader.setResources(resources);
    return reader;
}
I need a way to unmarshal single-object files. What's the point in forcing arrays (which I won't have in my use case)?
I don't understand why it has to be so.
The JsonItemReader is designed to read an array of objects because batch processing is usually about handling data sources with a lot of items, not a single item.
I can't find any workaround
JsonObjectReader is what you are looking for: you can implement it to read a single JSON object and use it with the JsonItemReader (either at construction time or via the setter). This is not a workaround but a strategy interface designed for specific use cases like yours.
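For illustration, a minimal sketch of such a JsonObjectReader that returns the single top-level object of a file and then signals end of input, assuming Jackson is on the classpath (the class name SingleObjectJsonReader is made up for this example):

import java.io.InputStream;

import org.springframework.batch.item.json.JsonObjectReader;
import org.springframework.core.io.Resource;

import com.fasterxml.jackson.databind.ObjectMapper;

public class SingleObjectJsonReader<T> implements JsonObjectReader<T> {

    private final ObjectMapper mapper = new ObjectMapper();
    private final Class<T> itemType;
    private InputStream inputStream;
    private boolean consumed;

    public SingleObjectJsonReader(Class<T> itemType) {
        this.itemType = itemType;
    }

    @Override
    public void open(Resource resource) throws Exception {
        this.inputStream = resource.getInputStream();
        this.consumed = false;
    }

    @Override
    public T read() throws Exception {
        if (consumed) {
            return null; // end of input: the file held a single object
        }
        consumed = true;
        return mapper.readValue(inputStream, itemType);
    }

    @Override
    public void close() throws Exception {
        if (inputStream != null) {
            inputStream.close();
        }
    }
}

It would then be plugged in with .jsonObjectReader(new SingleObjectJsonReader<>(JsonHar.class)) instead of the JacksonJsonObjectReader.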
Definitely not ideal @thomas-escolan. As @mahmoud-ben-hassine pointed out, the ideal would be to code a custom reader.
In case new SO users stumble on this question, I leave here a code example of how to do it.
Though this may not be ideal, this is how I handled the situation:
@Bean
public ItemReader<JsonHar> reader(@Value("file:${json.resources.path}/*.json") Resource[] resources) {
    log.info("Processing JSON resources: {}", Arrays.toString(resources));
    JsonItemReader<JsonHar> delegate = new JsonItemReaderBuilder<JsonHar>()
            .jsonObjectReader(new JacksonJsonObjectReader<>(JsonHar.class))
            .resource(resources[0]) // DEBUG had to force this because of NPE...
            .name("jsonItemReader")
            .build();
    MultiResourceItemReader<JsonHar> reader = new MultiResourceItemReader<>();
    reader.setDelegate(delegate);
    reader.setResources(Arrays.stream(resources)
            .map(WrappedResource::new) // forcing the bride to look good enough
            .toArray(Resource[]::new));
    return reader;
}
@RequiredArgsConstructor
static class WrappedResource implements Resource {

    @Delegate(excludes = InputStreamSource.class)
    private final Resource resource;

    @Override
    public InputStream getInputStream() throws IOException {
        log.info("Wrapping resource: {}", resource.getFilename());
        InputStream in = resource.getInputStream();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, UTF_8));
        String wrap = reader.lines().collect(Collectors.joining())
                .replaceAll("[^\\x00-\\xFF]", ""); // strips characters outside the Latin-1 range
        return new ByteArrayInputStream(("[" + wrap + "]").getBytes(UTF_8));
    }
}
I have been using the Spring Integration DSL to implement a message processing flow.
How can I unit test a single IntegrationFlow? Can anyone provide an example of how to unit test, say, the transform part of this bean:
@Bean
public IntegrationFlow transformMessage() {
    return message -> message
            .transform(new GenericTransformer<Message<String>, Message<String>>() {
                @Override
                public Message<String> transform(Message<String> message) {
                    MutableMessageHeaders headers =
                            new MutableMessageHeaders(message.getHeaders());
                    headers.put("Content-Type", "application/json");
                    headers.put("Accept", "application/json");
                    String payload = "Long message";
                    ObjectMapper mapper = new ObjectMapper();
                    HashMap<String, String> map = new HashMap<>();
                    map.put("payload", payload);
                    String jsonString = null;
                    try {
                        jsonString = mapper.writeValueAsString(map);
                    } catch (JsonProcessingException e) {
                        logger.error("Error:" + e.getMessage());
                    }
                    Message<String> request = new GenericMessage<String>(jsonString, headers);
                    return request;
                }
            })
            .handle(makeHttpRequestToValidateAcdrMessage())
            .enrichHeaders(h -> h.header("someHeader", "blah", true))
            .channel("entrypoint");
}
How can I test it?
Regards!
It seems to me that "unit testing" means checking the behavior of a particular part of the system, some small component.
So, in your case it is about that new GenericTransformer.
So, just make it a top-level component and perform tests against isolated instances of it!
Integration tests can be performed against the target IntegrationFlow as well.
Each EIP component in the flow definition is surrounded by MessageChannels - input and output. Even if you don't declare .channel() there, the Framework builds an implicit DirectChannel to wire endpoints into the flow.
Those implicit channels get bean names like:
channelBeanName = flowNamePrefix + "channel" + BeanFactoryUtils.GENERATED_BEAN_NAME_SEPARATOR + channelNameIndex++;
So, since your IntegrationFlow is from a Lambda, the input channel of the .transform() is just the input of the flow - transformMessage.input.
The channel between .transform() and the next .handle() has a bean name like transformMessage.channel#0, because it is the first implicit channel declaration.
The idea is that you can @Autowired both of these channels into your test case and add a ChannelInterceptor to them before testing.
The ChannelInterceptor can play the verifier role, making sure that you send the proper data to the transformer and receive the expected data from it.
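For example, a minimal sketch of that interceptor-based verification, using the implicit channel name described above (test class scaffolding omitted; ChannelInterceptorAdapter is from spring-messaging):

@Autowired
@Qualifier("transformMessage.channel#0")
private AbstractMessageChannel transformOutput;

@Test
public void transformerOutputIsVerified() {
    final List<Message<?>> seen = new ArrayList<>();
    transformOutput.addInterceptor(new ChannelInterceptorAdapter() {
        @Override
        public Message<?> preSend(Message<?> message, MessageChannel channel) {
            seen.add(message); // capture what the transformer produced
            return message;
        }
    });
    // ... send a test message to transformMessage.input, then assert on 'seen'
}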
More info can be found here: https://github.com/spring-projects/spring-integration-java-dsl/issues/23
The same techniques described in the testing-samples project in the samples repo can be used here.
Then send a message to the channel transformMessage.input and subscribe to entrypoint to get the result (or change it to a QueueChannel in your test case), as in the sketch below.
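A minimal sketch of such a test, assuming the flow configuration is imported into the test context, entrypoint is redeclared there as a QueueChannel, and the HTTP .handle() step is stubbed out (class and method names here are illustrative):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.PollableChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ContextConfiguration
public class TransformMessageFlowTest {

    @Autowired
    @Qualifier("transformMessage.input")
    private MessageChannel input;

    @Autowired
    @Qualifier("entrypoint")
    private PollableChannel entrypoint; // a QueueChannel in the test context

    @Test
    public void transformSetsJsonHeaders() {
        input.send(new GenericMessage<>("any payload"));

        Message<?> result = entrypoint.receive(10000);

        assertNotNull(result);
        assertEquals("application/json", result.getHeaders().get("Content-Type"));
    }
}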
Example of DSL IntegrationFlows testing is on github.
I want to use a Logback TurboFilter to modify the contents of logged messages. Specifically, I want to use Spring Security to prepend the username of the logged-in user to the message (that way, I can use a simple grep on the logs to find messages referring to a single logged-in user).
It looks like a custom TurboFilter is the way to go, but the documentation is somewhat unclear about what I want to do.
Here's a skeleton of what I want to do:
public class LoggingTurboFilter extends TurboFilter {

    @Override
    public FilterReply decide(Marker marker, Logger logger, Level level,
                              String format, Object[] params, Throwable t) {
        final String userId = SecurityContextHolder.getContext().getAuthentication().getName();
        // get message
        final String output = String.format("[%s] %s", userId, message); // or just use +
        // ?
        return FilterReply.NEUTRAL;
    }
}
This way, a log statement in the application log will look like this:
[smithjohn] entered methodName()
Can anyone assist?
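One caveat: a TurboFilter only decides whether an event is logged (via its FilterReply); it cannot rewrite the message text. A commonly used alternative is to put the username into the MDC and reference it from the Logback pattern. A minimal sketch of that approach, assuming Spring Security has populated the context for the current thread (the filter name UserMdcFilter is made up for this example):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.slf4j.MDC;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class UserMdcFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        // store the username for the duration of the request
        MDC.put("user", auth != null ? auth.getName() : "anonymous");
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("user"); // don't leak the value to pooled threads
        }
    }

    @Override
    public void destroy() { }
}

With %X{user} in the Logback pattern (e.g. [%X{user}] %msg%n), each line then comes out as [smithjohn] entered methodName() without touching the log statements themselves.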
Some use cases require being able to count the requests sent through the Apache HTTP API: for example, when massively requesting a web API that requires authentication through an API key and whose TOS limits the request count per key over time.
Being more specific about the case: I'm requesting https://domain1/fooNeedNoKey and, depending on its response's analyzed data, I request https://domain2/fooNeedKeyWithRequestsCountRestrictions. All of those 1-to-2-request sequences are performed through a single org.apache.http.impl.client.FutureRequestExecutionService.
As of now, depending on org.apache.httpcomponents:httpclient:4.3.3, I'm using these API elements:
org.apache.http.impl.client.FutureRequestExecutionService, to perform multi-threaded HTTP requests. It offers time metrics (how much time an HTTP thread took until terminated), but no request-counter metrics.
And the following client construction, which customizes the Apache API retry behavior:
final CloseableHttpClient httpClient = HttpClients.custom()
        // the auto-retry feature of the Apache API will retry up to 5
        // times on failure, being also allowed to send again requests
        // that were already sent if necessary (I don't really understand
        // the purpose of the second parameter below)
        .setRetryHandler(new StandardHttpRequestRetryHandler(5, true))
        // for HTTP 503 'Service unavailable' errors, also retrying up to
        // 5 times, waiting 500 ms between retries. My guess is that those
        // 5 retries count toward the previous "global" 5-retries setting.
        // The setting below, when used alone, would allow retries to be
        // enabled only for HTTP 503, or a greater retry count to be used
        // for this specific error
        .setServiceUnavailableRetryStrategy(new DefaultServiceUnavailableRetryStrategy(5, 500))
        .build();
Getting back to the topic:
A request counter could be created by extending the Apache API retry-related classes quoted above.
Alternatively, an Apache API support ticket on an unrelated issue suggests that this request-counter metric could be made available and forwarded out of the API, into Java NIO.
Edit 1:
It looks like the Apache API won't permit this to be done.
Quote from inside the API, RetryExec not being extendable:
package org.apache.http.impl.execchain;

public class RetryExec implements ClientExecChain {
    ..
    public CloseableHttpResponse execute(
            final HttpRoute route,
            final HttpRequestWrapper request,
            final HttpClientContext context,
            final HttpExecutionAware execAware) throws IOException, HttpException {
        ..
        for (int execCount = 1;; execCount++) {
            try {
                return this.requestExecutor.execute(route, request, context, execAware);
            } catch (final IOException ex) {
                ..
                if (retryHandler.retryRequest(ex, execCount, context)) {
                    ..
                }
                ..
            }
        }
    }
The 'execCount' variable is the needed info, and it can't be accessed since it's only used locally.
As well, one can extend 'retryHandler' and manually count requests in it, but 'retryHandler.retryRequest(ex, execCount, context)' is not given the 'request' variable, making it impossible to know what we're incrementing a counter for (one may only want to increment the counter for requests sent to a specific domain).
I'm out of Java ideas for it. A third-party alternative: have the Java process poll a file on disk, managed by a shell script that counts the desired requests. Sure, it would cause a lot of disk read accesses and would be a hardware-killer option.
OK, the workaround was easy: the HttpContext class of the API is intended for this:
// optionally, in case your HttpClient is configured for retry
class URIAwareHttpRequestRetryHandler extends StandardHttpRequestRetryHandler {

    public URIAwareHttpRequestRetryHandler(final int retryCount, final boolean requestSentRetryEnabled) {
        super(retryCount, requestSentRetryEnabled);
    }

    @Override
    public boolean retryRequest(final IOException exception, final int executionCount, final HttpContext context) {
        final boolean ret = super.retryRequest(exception, executionCount, context);
        if (ret) {
            doForEachRequestSentOnURI((String) context.getAttribute("requestURI"));
        }
        return ret;
    }
}

// optionally, in addition to the previous one, in case your HttpClient has
// specific settings for 'Service unavailable' error retries
class URIAwareServiceUnavailableRetryStrategy extends DefaultServiceUnavailableRetryStrategy {

    public URIAwareServiceUnavailableRetryStrategy(final int maxRetries, final int retryInterval) {
        super(maxRetries, retryInterval);
    }

    @Override
    public boolean retryRequest(final HttpResponse response, final int executionCount, final HttpContext context) {
        final boolean ret = super.retryRequest(response, executionCount, context);
        if (ret) {
            doForEachRequestSentOnURI((String) context.getAttribute("requestURI"));
        }
        return ret;
    }
}
// main HTTP querying code: retain the URI in the HttpContext to make it
// available in the custom retry-handler code
httpContext.setAttribute("requestURI", httpGET.getURI().toString());
try {
    httpClient.execute(httpGET, getHTTPResponseHandlerLazy(), httpContext);
    // if the request succeeded with no need for retries, or if it succeeded on the last
    // send: in either case, this is the last query sent to the server and it succeeded
    doForEachRequestSentOnURI(httpGET.getURI().toString());
} catch (final ClientProtocolException e) {
    // if the request definitively failed after retries: it's the last query sent to the server, and it failed
    doForEachRequestSentOnURI(httpGET.getURI().toString());
} catch (final IOException e) {
    // if the request definitively failed after retries: it's the last query sent to the server, and it failed
    doForEachRequestSentOnURI(httpGET.getURI().toString());
} finally {
    // restoring the context to its initial state
    httpContext.removeAttribute("requestURI");
}
Solved.