Log4j, patternLayout, class and category - configuration

I am having trouble establishing the exact difference between these two log4j conversion characters when used in a PatternLayout:
category (%c)
class (%C)
Can someone please give me an example where those two would be different?
Doesn't the category always match the class name?

They will be the same if you initialize the logger in the popular way suggested by the documentation and use it inside the X class:
Logger logger = Logger.getLogger(com.foo.X.class);
then you'll get the same value for %c and %C, because the logger name (constructed from com.foo.X.class.getName()) matches the name of the class where the logging statement was issued.
Call your logger "something"
Logger logger = Logger.getLogger("something");
and you'll have "something" for %c and the class name for %C.
Note that %C is computed by log4j from the current thread's stack trace, so it carries a significant performance impact, unlike %c, which is simply a string. You can conduct an interesting experiment to validate this:
package com.foo;

import org.apache.log4j.Logger;

class A {
    // note: the logger is deliberately named after B, not A
    private static final Logger logger = Logger.getLogger(B.class);
    void doSomething() {
        logger.info("inside A class");
    }
}
The output for pattern [%c][%m] assuming B is in package com.foo will be:
[com.foo.B][inside A class]
The output for pattern [%C][%m] regardless of the location of B will be:
[com.foo.A][inside A class]
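To see both conversion characters side by side, here is a minimal, self-contained sketch (assuming log4j 1.x; the class name and pattern are only illustrative). It configures a console appender programmatically and logs through a logger whose name does not match the calling class:
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class PatternDemo {
    public static void main(String[] args) {
        // Pattern prints the logger name (%c) and the caller class (%C) next to each other
        Logger.getRootLogger().addAppender(
                new ConsoleAppender(new PatternLayout("[%c][%C][%m]%n")));

        // Logger deliberately named "something" rather than after the calling class
        Logger logger = Logger.getLogger("something");
        logger.info("hello");   // prints something like [something][PatternDemo][hello]
    }
}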

Related

How to write the configuration of a list of objects for a Quarkus extension

I'm writing a Quarkus extension (internal usage) and cannot figure out how to correctly write the configuration part.
I need to set up a configuration that looks like this: a list of objects, each containing several configuration properties.
some:
  list-of-config:
    - prop1: value1
      prop2: value2
      prop3: value3
    - prop1: value4
      prop2: value5
      prop3: value6
So I was thinking of using @ConfigGroup and a List<> as in the following example:
@ConfigRoot(name = "some", phase = ConfigPhase.BUILD_TIME)
public class MyConfig {

    @ConfigItem(defaultValue = "true")
    public boolean enabled;

    public List<MyGroup> listOfConfig;

    @ConfigGroup
    public static class MyGroup {

        @ConfigItem
        public String prop1;

        @ConfigItem
        public String prop2;

        @ConfigItem
        public String prop3;
    }
}
Unfortunately, I get the following exception when starting the application:
SRCFG00013: No Converter registered for class org....MyConfig$MyGroup
@ConfigGroup works pretty well when the group is loaded as a single object, but not as a List. I guess this is not really related to the config group itself, since I also get this error without a config group, using plain objects.
Has anyone already tried to use a list of objects as configuration for a Quarkus extension? The official documentation lacks examples on that.
According to the documentation, any type that is not listed there and does not accept a String through a constructor, a valueOf method, or an of method cannot be used with List or Optional. The exception is Optional, which can also wrap a configuration group object; unfortunately, Optional + config group is bugged, as mentioned here.
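For contrast, a custom type is acceptable inside a List only when it can be converted from a single String value. A minimal sketch of such a type, assuming the implicit-converter rule quoted above (the class name and parsing logic are made up for illustration):
// Hypothetical value type: convertible from a single String such as "host:port",
// so the config layer can derive a converter for it and use it inside a List.
public class Endpoint {

    public final String host;
    public final int port;

    private Endpoint(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Implicit converter hook: static of(String)
    public static Endpoint of(String value) {
        String[] parts = value.split(":", 2);
        return new Endpoint(parts[0], Integer.parseInt(parts[1]));
    }
}
A field like @ConfigItem public List<Endpoint> endpoints; could then be populated from a comma-separated property, whereas a group of several unrelated properties (like MyGroup above) has no such single-String representation.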
So it looks like such complex custom objects are not supported, which answers my question.
Quarkus is working on adding config mapping support for extensions as well, which should solve this issue and add support for complex types in future versions (2.1.0 is the current version at the time of this question).
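For reference, the newer config mapping style mentioned above maps naturally onto a list of groups. This is only a hedged sketch of what it could look like (the interface and method names are illustrative, and availability for extension configuration depends on the Quarkus version):
import io.smallrye.config.ConfigMapping;

import java.util.List;

@ConfigMapping(prefix = "some")
public interface SomeConfig {

    // maps some.list-of-config[0].prop1, some.list-of-config[0].prop2, ...
    List<Group> listOfConfig();

    interface Group {
        String prop1();
        String prop2();
        String prop3();
    }
}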

Unable to serialize a nested python object using json.dumps()

I am new to Python, so sorry about the naive question. I have a simple code snippet where I try to serialize a Python object with json.dumps():
import json


class Document:
    uid = "1"
    content = "content1"
    domain = "domain"
    title = "title"


class ASSMSchema:
    requestSource = "unittest"
    documents = []


def entry():
    myObj = ASSMSchema()
    myObj.requestSource = "unittest"
    document1 = Document()
    document1.uid = "1"
    document1.content = "content1"
    document1.domain = "domain"
    document1.title = "title"
    myObj.documents.append(document1)
    print(json.dumps(myObj.__dict__))


if __name__ == "__main__":
    entry()
I get the following output when I run the above code
{"requestSource": "unittest"}
This is not expected, however, since it should also serialize the list of Document objects. Appreciate your answers. Thanks in advance!
Your class definition of ASSMSchema defines the class members documents and requestSource. These are not attributes of a single instance of this class, but shared between all instances. When you are running myObj.requestSource = "unittest", you are defining a member variable on the instance myObj. This member is actually reflected in the output of json.dumps, whereas the class members (like documents) are not.
For further reading, see https://docs.python.org/3/tutorial/classes.html#class-and-instance-variables
Depending on the complexity and desired maintainability of your program, there are multiple approaches to achieve your desired behaviour. First, you have to fix the mistake in both class definitions. To define a class with instance variables instead of class variables, do something like this:
class Foo:
    # class variables go here

    def __init__(self, field1, field2):
        # This method is called when you write Foo(field1, field2)
        # these are instance variables
        self.field1 = field1
        self.field2 = field2
If you want to dump this class as JSON, you can simply use the trick with __dict__: print(json.dumps(Foo(1,2).__dict__)) will output something like { "field1": 1, "field2": 2 }.
In your case, there is the documents member though, which is not JSON serializable by default. Therefore, you must handle this separately as well. You could write an encoder for your ASSMSchema (see this thread for more info on that). It could be implemented roughly like this:
from json import JSONEncoder


class ASSMSchemaEncoder(JSONEncoder):

    def default(self, o):
        return {
            "requestSource": o.requestSource,
            # Convert the list of Document objects to a list of dicts
            "documents": [d.__dict__ for d in o.documents]
        }
Now, when serializing an instance of ASSMSchema, this implementation is used and the documents member is replaced with a list of dictionaries (which can be serialized by the default encoder). Note that you have to specify this encoder (via the cls argument) when calling json.dumps; see the linked thread above.

Is there any equivalent to ScalaTest's withClue in Junit5 /Hamcrest?

I would like to provide the same textual message for a group of assertions, something like:
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;
@Test
public void myTest() {
    MyClass result = ...

    withClue("the result {} does not conform", result.toString()) {
        assertThat(result.id, notEmpty())
        assertThat(result.xxxx, hasLength(33))
    }
}
My expectation is that the "clue" will be shown before the first failed assertion. In Scala I'm used to ScalaTest's withClue(), which I usually use to show the full text representation of an object (usually a JsonNode in my case); that usually helps me understand better what went wrong.
So is there any way, with vanilla JUnit, Hamcrest, or any other library dependency, to get a message prepended to a group of assertions?
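One possible direction, sketched here under the assumption that JUnit 5 and a recent Hamcrest are on the classpath (this is not an answer from the original thread): Assertions.assertAll accepts a heading that is prepended to the report of the grouped failures, which is close in spirit to ScalaTest's withClue. The result value below is just a stand-in for the real object under test:
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;
import static org.junit.jupiter.api.Assertions.assertAll;

import org.junit.jupiter.api.Test;

class AssertAllClueTest {

    @Test
    void myTest() {
        String result = "some value under test";   // stand-in for the real object

        // The heading shows up in front of the grouped failure report,
        // and every assertion is executed even if an earlier one fails.
        assertAll("the result " + result + " does not conform",
                () -> assertThat(result, not(emptyString())),
                () -> assertThat(result, hasLength(33)));   // fails, so the heading is reported
    }
}
Hamcrest alone can also attach a reason to a single check via assertThat(reason, actual, matcher), but that message is per assertion rather than per group.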

Python objects in dealloc in cython

In the docs it is written that "Any C data that you explicitly allocated (e.g. via malloc) in your __cinit__() method should be freed in your __dealloc__() method."
This is not my case. I have the following extension class:
cdef class SomeClass:
    cdef dict data
    cdef void * u_data

    def __init__(self, data_len):
        self.data = {'columns': []}
        if data_len > 0:
            self.data.update({'data': deque(maxlen=data_len)})
        else:
            self.data.update({'data': []})
        self.u_data = <void *>self.data

    @property
    def data(self):
        return self.data

    @data.setter
    def data(self, new_val: dict):
        self.data = new_val
Some C function has access to this class and appends some data to the SomeClass().data dict. What should I write in __dealloc__ when I want to delete an instance of SomeClass?
Maybe something like:
def __dealloc__(self):
    self.data = None
    free(self.u_data)
Or there is no need to dealloc anything at all?
No, you don't need to, and no, you shouldn't. From the documentation:
You need to be careful what you do in a __dealloc__() method. By the time your __dealloc__() method is called, the object may already have been partially destroyed and may not be in a valid state as far as Python is concerned, so you should avoid invoking any Python operations which might touch the object. In particular, don’t call any other methods of the object or do anything which might cause the object to be resurrected. It’s best if you stick to just deallocating C data.
You don’t need to worry about deallocating Python attributes of your object, because that will be done for you by Cython after your __dealloc__() method returns.
You can confirm this by inspecting the C code (you need to look at the full code, not just the annotated HTML). There's an autogenerated function __pyx_tp_dealloc_9someclass_SomeClass (the name may vary slightly depending on what you called your module) that does a range of things, including:
__pyx_pw_9someclass_9SomeClass_3__dealloc__(o);
/* some other code */
Py_CLEAR(p->data);
where the function __pyx_pw_9someclass_9SomeClass_3__dealloc__ is (a wrapper for) your user-defined __dealloc__. Py_CLEAR will ensure that data is appropriately reference-counted then set to NULL.
It's a little hard to follow because it all goes through several layers of wrappers, but you can confirm that it does what the documentation says.

Spock mock returns null inside collaborator but not in feature method

I have a problem with a Spock Mock() object.
I have a Java class I'm trying to test. This class does some FTP stuff I want to mock.
My sample code:
class ReceiveDataTest extends Specification {

    String downloadPath = 'downloadPath'
    String downloadRegex = 'downloadRegex'
    SftpUtils sftpUtils = Mock()
    ReceiveData receiveData

    def setup() {
        sftpUtils.getFileNames(downloadPath, downloadRegex) >> ['file1', 'file2']
        receiveData = new ReceiveData()
        receiveData.setDownloadPath(downloadPath)
        receiveData.setDownloadRegex(downloadRegex)
        receiveData.setSftpUtils(sftpUtils)
    }

    def "test execute"() {
        given:
        def files = sftpUtils.getFileNames(downloadPath, downloadRegex)
        files.each { println it }

        when:
        receiveData.execute()

        then:
        1 * sftpUtils.getFileNames(downloadPath, downloadRegex)
    }
}
public class ReceiveData {

    // fields, setters etc.

    public void execute() {
        List<String> fileNames = sftpUtils.getFileNames(downloadPath, downloadRegex);
        for (String name : fileNames) {
            // download and process logic
        }
    }
}
Now, inside "test execute" the files.each {} call prints what is expected. But when receiveData.execute() is called, my sftpUtils returns null.
Any ideas why?
EDIT
Maybe I didn't state my problem well: I don't want to just check whether getFileNames was called. I need the result in order to properly check the for loop. If I comment out the loop inside execute, the test passes. But since I use the result of the getFileNames() method, I get an NPE when the execute method reaches the for loop. With Mockito I would do something like this:
Mockito.when(sftpUtils.getFileNames(downloadPath, downloadRegex)).thenReturn(filenamesList);
receiveData.execute();
Mockito.verify(sftpUtils).getFileNames(downloadPath, downloadRegex);
// this is what I want to test and resides inside the for loop
Mockito.verify(sftpUtils).download(downloadPath, filenamesList.get(0));
Mockito.verify(sftpUtils).delete(downloadPath, filenamesList.get(0));
but I cannot use Mockito.verify() inside a Spock then: block.
The main problem is that you did not include the response generator (the >> part) in the expectation (i.e. the "1 * ..." part inside the then: block).
This is explained well in the Spock documentation:
http://spockframework.org/spock/docs/1.0/interaction_based_testing.html#_combining_mocking_and_stubbing
https://spock-framework.readthedocs.org/en/latest/interaction_based_testing.html#wheretodeclareinteractions
You shouldn't have to declare your stub in the setup: block. You can just specify it once in the then: block, even though that follows the call to receiveData.execute(). That's part of the magic of Spock, thanks to Groovy AST transformations. And since (non-shared) fields are reinitialized before each test (more AST-based magic), you don't even need setup() in this case.
Another odd thing is that you are both stubbing out sftpUtils.getFileNames() and also calling it from the test code. Mocks and stubs are intended to replace collaborators that are called from the system under test. There's no reason to call the stub from the test driver. So delete the call to getFileNames() from your given: block and let the code under test call it instead (as it does).
Groovy lets you simplify calls to Java set and get methods. Look at the initialization of receiveData below. It's okay to use def in Groovy; let the compiler figure out the data types for you.
Leading to something like:
class ReceiveDataTest extends Specification {

    // only use static for constants with Spock
    static String PATH = 'downloadPath'
    static String REGEX = 'downloadRegex'

    def mockSftpUtils = Mock(SftpUtils)
    def receiveData = new ReceiveData(downloadPath: PATH,
                                      downloadRegex: REGEX,
                                      sftpUtils: mockSftpUtils)

    def "execute() calls getFileNames() exactly once"() {
        when:
        receiveData.execute()

        then:
        1 * mockSftpUtils.getFileNames(PATH, REGEX) >> ['file1', 'file2']
        0 * mockSftpUtils.getFileNames(_, _)
        // The second line asserts that getFileNames() is never called
        // with any arguments other than PATH and REGEX, aka strict mocking.
        // Order matters! If you swapped the lines, the more specific rule would never match.
    }
}
}