Logging to different files based on a condition - logback

We have an application with a condition. If the condition is true, we want to write some log messages to one file; otherwise we want to write them to another file.
The logging should happen based on the condition, not on the log level.
How is this possible in Dropwizard using the yaml file?

This is supported out of the box. Here is my example:
server:
  rootPath: /api/*
  requestLog:
    appenders: []
  applicationConnectors:
    - type: http
      port: 9085
logging:
  level: INFO
  loggers:
    "my-log-1":
      level: DEBUG
      additive: false
      appenders:
        - type: file
          currentLogFilename: /home/artur/var/log/test1.log
          archivedLogFilenamePattern: /home/artur/var/log/test1.log%d.log.gz
          archivedFileCount: 5
          logFormat: '[%level] %msg%n'
    "my-log-2":
      level: DEBUG
      additive: false
      appenders:
        - type: file
          currentLogFilename: /home/artur/var/log/test2.log
          archivedLogFilenamePattern: /home/artur/var/log/test2.log%d.log.gz
          archivedFileCount: 5
NOTE: You cannot use tabs in the configuration.
This configuration creates two loggers. The first is called "my-log-1", the second "my-log-2".
You can now look these loggers up in your Java class, for example in my application:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Application extends io.dropwizard.Application<Configuration> {

    // Look the loggers up by the names configured in the yaml (SLF4J/logback)
    private static final Logger log = LoggerFactory.getLogger("my-log-1");
    private static final Logger log2 = LoggerFactory.getLogger("my-log-2");

    @Override
    public void run(Configuration configuration, Environment environment) throws Exception {
        log.info("Test1");  // writes to the first file
        log2.info("Test2"); // writes to the second file
    }

    public static void main(String[] args) throws Exception {
        new Application().run("server", "/home/artur/dev/repo/sandbox/src/main/resources/config/test.yaml");
    }
}
Note the two loggers and their creation at the top of the file.
You can now use them like any logger. Add your condition and log away:
int random = new Random().nextInt();
if (random % 2 == 0) {
    log.info("Test1");  // writes to the first file
} else {
    log2.info("Test2"); // writes to the second file
}
I hope that answers your question,
thanks,
Artur

Related

ShedLock - Not Executing

I am using the ShedLock library 4.20.0:
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-spring</artifactId>
    <version>4.20.0</version>
</dependency>
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-jdbc-template</artifactId>
    <version>2.1.0</version>
</dependency>
The scheduler job is,
@Scheduled(fixedRate = 5000)
@SchedulerLock(name = "TaskScheduler__scheduledTask", lockAtLeastForString = "PT5M", lockAtMostForString = "PT14M")
public void reportCurrentTime() {
    LockAssert.assertLocked();
    log.info("The time is now {} {}", dateFormat.format(new Date()), dataSource);
}
It shows @SchedulerLock as deprecated.
And the Spring Boot class:
@SpringBootApplication
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S")
public class DMSCaseEmulatorSpringApplication {
    public static void main(String[] args) {
        SpringApplication.run(DMSCaseEmulatorSpringApplication.class, args);
    }
}
When I execute the Spring Boot class, it triggers ShedLock and creates a record in the database table, but in the logs I keep getting the following:
19:54:39.188 [scheduling-1] DEBUG n.j.s.c.DefaultLockingTaskExecutor - Locked TaskScheduler__scheduledTask.
19:54:39.188 [scheduling-1] INFO u.g.h.c.d.s.ScheduledTasks - The time is now 19:54:39 HikariDataSource (HikariPool-1)
19:54:39.205 [scheduling-1] DEBUG n.j.s.c.DefaultLockingTaskExecutor - Unlocked TaskScheduler__scheduledTask.
19:54:44.065 [scheduling-1] DEBUG n.j.s.c.DefaultLockingTaskExecutor - Not executing TaskScheduler__scheduledTask. It's locked.
19:54:49.062 [scheduling-1] DEBUG n.j.s.c.DefaultLockingTaskExecutor - Not executing TaskScheduler__scheduledTask. It's locked.
Any thoughts would be appreciated.
The issue is caused by lockAtLeastForString = "PT5M". By specifying that, you are saying that the lock should be held for at least 5 minutes even if the task finishes sooner, which is why the subsequent runs log "Not executing ... It's locked."
Regarding the deprecation warning, please consult the JavaDoc.
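For reference, a sketch of what the corrected job could look like, assuming the task itself finishes in well under five seconds: it shortens lockAtLeastFor and switches to the non-deprecated @SchedulerLock from net.javacrumbs.shedlock.spring.annotation. The class name ScheduledTasks mirrors the one visible in your logs, and the exact durations are illustrative, not prescriptive:

import java.text.SimpleDateFormat;
import java.util.Date;
import net.javacrumbs.shedlock.core.LockAssert;
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ScheduledTasks {

    private final SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");

    @Scheduled(fixedRate = 5000)
    @SchedulerLock(name = "TaskScheduler__scheduledTask",
            lockAtLeastFor = "PT1S",   // short floor, so later runs are not skipped for 5 minutes
            lockAtMostFor = "PT30S")   // safety bound in case the node dies mid-run
    public void reportCurrentTime() {
        LockAssert.assertLocked();
        System.out.println("The time is now " + dateFormat.format(new Date()));
    }
}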

Overriding application.yml config properties with runtime.groovy properties in Grails 3

I'm having config issues in Grails 3 and want to hook into the config setup process, unless there's a better way.
We have a plugin (jasypt) that requires config to be set in application.yml, or it throws errors (example below). We need to set the config values in runtime.groovy, but they don't override any values set in the yml.
application.yml
jasypt:
    algorithm: "PBEWITHSHA256AND256BITAES-CBC-BC"
    providerName: "BC"
    password: "-"
    keyObtentionIterations: 10
runtime.groovy
jasypt {
    algorithm = ExternalSecureKeyConfig.getInstance().jasypt.algorithm
    providerName = ExternalSecureKeyConfig.getInstance().jasypt.providerName
    password = ExternalSecureKeyConfig.getInstance().jasypt.password
    keyObtentionIterations = ExternalSecureKeyConfig.getInstance().jasypt.keyObtentionIterations
}
Debugging during startup and at runtime shows that none of the config gets overridden (the password stays "-"). If I move the config from the yml to application.groovy, the runtime values get applied, but I get the following error on startup:
org.jasypt.exceptions.EncryptionInitializationException: If "encryptorRegisteredName" is not specified, then "password" (and optionally "algorithm" and "keyObtentionIterations") must be specified
Currently I'm hoping I can manually set the jasypt config values, or hook into the Groovy config setup to make sure they override the previously set yml values. I'm having trouble figuring out how to do this, or whether it's possible at all.
jasypt v2.0.2 with Grails 3.3.10 (hibernate5, GORM 6.1)
Solved
I ended up overriding the application.yml config with values from a custom config file (a jasypt.groovy on the classpath, parsed with ConfigSlurper, containing the same jasypt block with the real values). I still find it strange that runtime.groovy doesn't override the yml config values.
class Application extends GrailsAutoConfiguration implements EnvironmentAware {

    static void main(String[] args) {
        GrailsApp.run(Application, args)
    }

    @Override
    void setEnvironment(Environment environment) {
        URL jasyptConfigUrl = getClass().classLoader.getResource('jasypt.groovy')
        if (jasyptConfigUrl) {
            def jasyptConfig = new ConfigSlurper().parse(jasyptConfigUrl)
            environment.propertySources.addFirst(new MapPropertySource('jasypt.groovy', jasyptConfig))
        }
    }
}

GCP Dataflow pipeline no json rows being read/processed

Based on the WordCount example, I am trying to read my own JSON data (instead of the Shakespeare texts).
I am running the pipeline with:
mvn compile exec:java -Dexec.mainClass=myPkg.myClass -Dexec.args=" \
--project=myProj \
--stagingLocation=gs://myBkt/stage \
--runner=BlockingDataflowPipelineRunner \
--output=gs://myBkt/output/out \
--defaultWorkerLogLevel=DEBUG"
the output from the console is as follows:
<date> com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 68 files. Enable logging at DEBUG level to see which files will be staged.
<date> myPkg$GroupPublished apply
<date> myPkg$GroupPublished apply
INFO: GroupPublished/JsonToDatePosPlatKeyFn.out [PCollection]
<date> myPkg main
main
static void main(String[] args) {
    ...
    Pipeline p = Pipeline.create(options);
    p.apply(TextIO.Read.named("ReadJson").from(options.getInputFile()))
     .apply(new GroupPublished())
     .apply(ParDo.of(new FormatAsStringFn()))
     .apply(TextIO.Write.named("WriteCounts").to(options.getOutput()));
}
GroupPublished transformation
static class GroupPublished extends PTransform<PCollection<String>,
        PCollection<KV<DatePosPlatKey, Long>>> {
    @Override
    public PCollection<KV<DatePosPlatKey, Long>> apply(PCollection<String> lines) {
        PCollection<DatePosPlatKey> keyList =
                lines.apply(ParDo.of(new JsonToDatePosPlatKeyFn()));
        PCollection<KV<DatePosPlatKey, Long>> keysCounted =
                keyList.apply(Count.<DatePosPlatKey>perElement());
        return keysCounted;
    }
}
json row processing
static class JsonToDatePosPlatKeyFn extends DoFn<String, DatePosPlatKey> {
    @Override
    public void processElement(ProcessContext c) throws Exception {
        JsonNode root = mapper.readTree(c.element());
        for (JsonNode jsonFact : root) {
            DatePosPlatKey key = new DatePosPlatKey(...construct...);
            ...manipulate...
            c.output(key);
        }
    }
}
data class
@DefaultCoder(AvroCoder.class)
public static class DatePosPlatKey { ... }
stuff I've checked so far:
- adding defaultWorkerLogLevel doesn't seem to make any difference to the console output
- renaming the json file throws an error, so I know it's been seen by TextIO
- the json files have data in the format: {...}\n{...}\n...
- no logging or dataflow job appears in the google cloud console
how can I better debug a complete lack of data?
can you see what I've done wrong?
Upon offline discussion it turned out the code was missing a call to p.run(), so the pipeline was only constructed but not executed.
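For completeness, a minimal sketch of the corrected main, using the same imports as the snippets above; MyOptions is a hypothetical options interface standing in for whatever declares getInputFile() and getOutput() in the actual code. The only substantive change is the final p.run():

public static void main(String[] args) {
    // MyOptions is hypothetical, standing in for the options interface
    // that declares getInputFile() and getOutput() in the question.
    MyOptions options = PipelineOptionsFactory.fromArgs(args)
            .withValidation()
            .as(MyOptions.class);

    Pipeline p = Pipeline.create(options);
    p.apply(TextIO.Read.named("ReadJson").from(options.getInputFile()))
     .apply(new GroupPublished())
     .apply(ParDo.of(new FormatAsStringFn()))
     .apply(TextIO.Write.named("WriteCounts").to(options.getOutput()));

    p.run(); // without this, the pipeline is only constructed, never submitted
}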

Restlet is returning HTTP 415 for every POST request I make

I am getting this error every time I try to post data to my server:
Server logs:
Starting the internal [HTTP/1.1] server on port 9192
Starting facilitymanager.api.rest.FacilityManagerAPIRestWrapper application
2015-06-22 13:18:11 127.0.0.1 - - 9192 POST /devices/rename - 415 554 45 64 http://localhost:9192 Java/1.7.0_79 -
Stopping the internal server
However, in the service handler I state that I will handle JSON messages, as you can see here:
public static final class RenameDevice extends ServerResource {
    @Post("application/json")
    public String doPost() throws InterruptedException, ConstraintViolationException,
            InvalidChoiceException, JSONException {
        configureRestForm(this);
        final String deviceId = getRequest().getAttributes().get("device_id").toString();
        final String newName = getRequest().getAttributes().get("new_name").toString();
        return renameDevice(deviceId, newName).toString(4);
    }
}
/**
 * Enables incoming connections from different servers.
 *
 * @param serverResource
 * @return
 */
@SuppressWarnings({ "unchecked", "rawtypes" })
private static Series<Header> configureRestForm(ServerResource serverResource) {
    Series<Header> responseHeaders = (Series<Header>) serverResource.getResponse().getAttributes()
            .get("org.restlet.http.headers");
    if (responseHeaders == null) {
        responseHeaders = new Series(Header.class);
        serverResource.getResponse().getAttributes().put("org.restlet.http.headers", responseHeaders);
    }
    responseHeaders.add("Access-Control-Allow-Origin", "*");
    responseHeaders.add("Access-Control-Allow-Methods", "GET, POST, PUT, OPTIONS");
    responseHeaders.add("Access-Control-Allow-Headers", "Content-Type");
    responseHeaders.add("Access-Control-Allow-Credentials", "false");
    responseHeaders.add("Access-Control-Max-Age", "60");
    return responseHeaders;
}
What am I missing here?
Thanks!
Edit: This is the full log concerning the request:
Processing request to: "http://localhost:9192/devices/rename"
Call score for the "org.restlet.routing.VirtualHost#54594d1d" host: 1.0
Default virtual host selected
Base URI: "http://localhost:9192". Remaining part: "/devices/rename"
Call score for the "" URI pattern: 0.5
Selected route: "" -> facilitymanager.api.rest.FacilityManagerAPIRestWrapper#d75d3d7
Starting facilitymanager.api.rest.FacilityManagerAPIRestWrapper application
No characters were matched
Call score for the "/devices/list" URI pattern: 0.0
Call score for the "/groups/rename" URI pattern: 0.0
Call score for the "/devices/rename" URI pattern: 1.0
Selected route: "/devices/rename" -> Finder for RenameDevice
15 characters were matched
New base URI: "http://localhost:9192/devices/rename". No remaining part to match
Delegating the call to the target Restlet
Total score of variant "[text/html]"= 0.25
Total score of variant "[application/xhtml+xml]"= 5.0E-4
Converter selected for StatusInfo: StatusInfoHtmlConverter
2015-06-22 13:28:31 127.0.0.1 - - 9192 POST /devices/rename - 415 554 45 67 http://localhost:9192 Java/1.7.0_79 -
POST /devices/rename HTTP/1.1 [415 Unsupported Media Type] ()
KeepAlive stream used: http://localhost:9192/devices/rename
sun.net.www.MessageHeader#2bf4dee76 pairs: {null: HTTP/1.1 415 Unsupported Media Type}{Content-type: text/html; charset=UTF-8}{Content-length: 554}{Server: Restlet-Framework/3.0m1}{Accept-ranges: bytes}{Date: Mon, 22 Jun 2015 12:28:31 GMT}
To obtain a full log, one must invoke this line of code anywhere before starting the restlet/component server:
// Create a new Component.
component = new Component();
// Add a new HTTP server listening on default port.
component.getServers().add(Protocol.HTTP, SERVER_PORT);
Engine.setLogLevel(Level.ALL); /// <----- HERE
component.start();
I've found the problem! The thing is that a tagged @Post method must receive an argument.
So the method should look like this:
#Post("application/json")
public String doPost(Representation entity) throws InterruptedException, ConstraintViolationException,
InvalidChoiceException, JSONException, IOException {
configureRestForm(this);
final Reader r = entity.getReader();
StringBuffer sb = new StringBuffer();
int c;
// Reads the JSON from the input stream
while ((c = r.read()) != -1) {
sb.append((char) c);
}
System.out.println(sb.toString()); // Shows the JSON received
}
}
The Representation entity argument gives you the means to detect the media type you are receiving. But since the method is tagged with @Post("application/json"), I do not need to verify this again.
Imagine that I used just @Post instead of @Post("application/json"); I would then have to validate the media type (or types) this way:
@Post
public Representation doPost(Representation entity) throws ResourceException {
    if (entity.getMediaType().isCompatible(MediaType.APPLICATION_JSON)) {
        // ...
    }
    // ...
}
A method with a @Post annotation is not required to receive an argument, unless you intend to receive a payload from your request.
If you want to filter on the media type of the incoming representation, use the "json" shortcut, as follows:
@Post("json")
This saves you from having to test the media type of the representation yourself.
The list of all available shortcuts is available here. Most of them are quite simple to remember. The main reason to use shortcuts (or "extensions", such as file extensions) is that "xml" relates to several media types (application/xml, text/xml).
If you want to get the full content of the representation, simply call its "getText()" method instead of using getReader() and consuming it yourself.
If you want to support CORS, I suggest you use the CorsService (available in version 2.3 of the Restlet Framework).
Notice there is a shortcut for getting the headers from a Request or a Response: just call the "getHeaders()" method.
Notice there is a shortcut for getting the attributes extracted from the URL: just call the "getAttribute(String)" method.
Here is an updated version of your source code:
public class TestApplication extends Application {

    public final static class TestPostResource extends ServerResource {
        @Post
        public String doPost(Representation entity) throws Exception {
            final String deviceId = getAttribute("device_id");
            final String newName = getAttribute("new_name");
            System.out.println(entity.getText());
            System.out.println(getRequest().getHeaders());
            System.out.println(getResponse().getHeaders());
            return deviceId + "/" + newName;
        }
    }

    public static void main(String[] args) throws Exception {
        Component c = new Component();
        c.getServers().add(Protocol.HTTP, 8183);
        c.getDefaultHost().attach(new TestApplication());

        CorsService corsService = new CorsService();
        corsService.setAllowedOrigins(new HashSet<String>(Arrays.asList("*")));
        corsService.setAllowedCredentials(true);
        corsService.setSkippingResourceForCorsOptions(true);
        c.getServices().add(corsService);

        c.start();
    }

    @Override
    public Restlet createInboundRoot() {
        Router router = new Router(getContext());
        router.attach("/testpost/{device_id}/{new_name}", TestPostResource.class);
        return router;
    }
}

Couchbase: java client doesn't create document in server

I'm new to Couchbase. I copied the first code example (http://www.couchbase.com/communities/java/getting-started) into my Eclipse project, but when I run it and check the server I can't find the document. Below is the console output:
2014-03-16 17:30:42.390 INFO com.couchbase.client.CouchbaseConnection: Added {QA sa=/127.0.0.1:11210, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2014-03-16 17:30:42.390 INFO com.couchbase.client.CouchbaseClient: CouchbaseConnectionFactory{, bucket='trust', nodes=[http://localhost:8091/pools], order=RANDOM, opTimeout=2500, opQueue=16384, opQueueBlockTime=10000, obsPollInt=10, obsPollMax=500, obsTimeout=5000, viewConns=10, viewTimeout=75000, viewWorkers=1, configCheck=10, reconnectInt=1100, failureMode=Redistribute, hashAlgo=NATIVE_HASH}
2014-03-16 17:30:42.390 INFO com.couchbase.client.CouchbaseConnection: Connection state changed for sun.nio.ch.SelectionKeyImpl#f8ae79
2014-03-16 17:30:42.468 INFO com.couchbase.client.CouchbaseClient: viewmode property isn't defined. Setting viewmode to production mode
2014-03-16 17:30:42.703 INFO net.spy.memcached.auth.AuthThread: Authenticated to localhost/127.0.0.1:11210
and here is my Java class:
public class Test_ {
    public static void main(String[] args) throws Exception {
        // (Subset) of nodes in the cluster to establish a connection
        List<URI> hosts = Arrays.asList(new URI("http://localhost:8091/pools"));
        // Name of the bucket to connect to
        String bucket = "trust";
        // Password of the bucket (empty string if none)
        String password = "HIDDEN";
        // Connect to the cluster
        CouchbaseClient client = new CouchbaseClient(hosts, bucket, password);
        // Store a document
        client.set("33", "test my java code").get();
        // Retrieve the document and print it
        //System.out.println(client.get("33"));
        // Shut down properly
        client.shutdown();
    }
}
Solved: it was just the firewall blocking port 11210.
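If you hit the same symptom, a quick TCP probe can rule the firewall in or out before digging into the client code. A minimal sketch, assuming the default Couchbase data port 11210 on localhost:

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        // Try to open a plain TCP connection to the Couchbase data port.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("127.0.0.1", 11210), 2000); // 2 s timeout
            System.out.println("Port 11210 is reachable");
        }
    }
}

If the connect times out or is refused while the server is running, a firewall (or a wrong bind address) is the likely culprit, as it was here.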