Use logger name if MDC key missing - logback

I am using logback with a third party package which sets an identifier in the MDC when its code is running. The rest of the time, this identifier is not set. So if I use a PatternLayout of [%X{id}] %m%n, then I see messages like
[Foo] Foo running
[Bar] Bar running
for messages related to the package. However, the rest of my log statements look like
[] Thing happened
The %X{id} is useful information when it exists, but I would like the logger name to be used when it is not. I tried
[%X{id:-%logger{20}}]
and
[%X{id:-logger{20}}]
but neither used the logger name as a default value.
I could write a custom layout that sets id to the logger name if it is not set, forwards to the layout, and then clears the field. Is there a simpler way to do this?
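(The kind of thing I have in mind, roughly, is a custom conversion word that falls back to the logger name; class, package, and conversion-word names below are made up, and the logger name is not abbreviated to 20 characters here:)
import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

// Returns the MDC "id" when present, otherwise the event's logger name.
public class IdOrLoggerConverter extends ClassicConverter {
    @Override
    public String convert(ILoggingEvent event) {
        String id = event.getMDCPropertyMap().get("id");
        return (id != null && !id.isEmpty()) ? id : event.getLoggerName();
    }
}
It would be registered in logback.xml and used in place of %X{id}:
<conversionRule conversionWord="idOrLogger"
                converterClass="com.example.logging.IdOrLoggerConverter"/>
<!-- pattern: [%idOrLogger] %m%n -->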

Related

Working on migration of SPL 3.0 to 4.2 (TEDA)

I am working on migrating 3.0 code into the new 4.2 framework. I am facing a few difficulties:
How do I do CDR-level deduplication in the new 4.2 framework? (Note: table deduplication is already done.)
Where do I implement PostDedupProcessor - context or chainsink custom? In either case, do I need to remove duplicate hashcodes from the list or just reject the tuples? Here I am also updating columns for a few tuples.
My file is not moving into the archive. The temporary output file is being generated, but it is empty and outside the load directory. What could be the possible reasons? I have thoroughly checked the config parameters, and after adding logs it seems the correct output is being sent from the transformer custom, so I don't know where it is getting stuck. I printed the TableRowGenerator stream to the logs (at the end of the DataProcessor).
1. and 2.:
You need to select the type of deduplication; it does not make a big difference whether you choose "table-" or "cdr-level-deduplication".
The ite.businessLogic.transformation.outputType does affect this. There is only one Dedup; you cannot have both.
Select recordStream for "cdr-level-deduplication" and do the transformation to table row format (e.g., if you want to use the TableFileWriter) in xxx.chainsink.custom::PostContextDataProcessor.
In xxx.chainsink.custom::PostContextDataProcessor you need to add custom code for duplicate handling: reject (discard) the tuples, set special column values, or write them to different target tables.
3.:
Possible reasons could be:
Missing forwarding of window punctuations or the statistic tuple
An error in the BloomFilter configuration; you would notice it easily because the PE is down and the error log gives hints about the wrong sha2 functions being used
To troubleshoot your ITE application, I recommend enabling the following debug sinks if checking the StreamsStudio live graph is not sufficient:
ite.businessLogic.transformation.debug=on
ite.businessLogic.group.debug=on
ite.businessLogic.sink.debug=on
Run a test with a single input file only and check the flow of your record and statistic tuples. The debug sinks also write punctuation markers to the debug files.

kafka-python 1.3.3: KafkaProducer.send with explicit key fails to send message to broker

(Possibly a duplicate of Can't send a keyedMessage to brokers with partitioner.class=kafka.producer.DefaultPartitioner, although the OP of that question didn't mention kafka-python. And anyway, it never got an answer.)
I have a Python program that has been successfully (for many months) sending messages to the Kafka broker, using essentially the following logic:
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               retries=3)
...
msg = json.dumps(some_message)
res = producer.send(some_topic, value=msg)
Recently, I tried to upgrade it to send messages to different partitions based on a definite key value extracted from the message:
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               key_serializer=str.encode,
                               retries=3)
...
try:
    key = some_message[0]
except:
    key = None
msg = json.dumps(some_message)
res = producer.send(some_topic, value=msg, key=key)
However, with this code, no messages ever make it out of the program to the broker. I've verified that the key value extracted from some_message is always a valid string. Presumably I don't need to define my own partitioner, since, according to the documentation:
The default partitioner implementation hashes each non-None key using the same murmur2 algorithm as the java client so that messages with the same key are assigned to the same partition.
Furthermore, with the new code, when I try to determine what happened to my send by calling res.get (to obtain a kafka.FutureRecordMetadata), that call throws a TypeError exception with the message: descriptor 'encode' requires a 'str' object but received a 'unicode'.
(As a side question, I'm not exactly sure what I'd do with the FutureRecordMetadata if I were actually able to get it. Based on the kafka-python source code, I assume I'd want to call either its succeeded or its failed method, but the documentation is silent on the point. The documentation does say that the return value of send "resolves to" RecordMetadata, but I haven't been able to figure out, from either the documentation or the code, what "resolves to" means in this context.)
Anyway: I can't be the only person using kafka-python 1.3.3 who's ever tried to send messages with a partitioning key, and I have not seen anything on teh Intertubes describing a similar problem (except for the SO question I referenced at the top of this post).
I'm certainly willing to believe that I'm doing something wrong, but I have no idea what that might be. Is there some additional parameter I need to supply to the KafkaProducer constructor?
The fundamental problem turned out to be that my key value was a unicode, even though I was quite convinced that it was a str. Hence the selection of str.encode for my key_serializer was inappropriate, and was what led to the exception from res.get. Omitting the key_serializer and calling key.encode('utf-8') was enough to get my messages published, and partitioned as expected.
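In other words, roughly (a sketch reusing the same placeholder names some_addr, some_topic, and some_message from above):
import json
import kafka

# No key_serializer here; the key is encoded to bytes explicitly below.
producer = kafka.KafkaProducer(bootstrap_servers=[some_addr],
                               retries=3)

try:
    key = some_message[0]
except Exception:
    key = None

msg = json.dumps(some_message)
# Encode the (possibly unicode) key to UTF-8 bytes before sending.
res = producer.send(some_topic,
                    value=msg,
                    key=key.encode('utf-8') if key is not None else None)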
A large contributor to the obscurity of this problem (for me) was that the kafka-python 1.3.3 documentation does not go into any detail on what a FutureRecordMetadata really is, nor what one should expect in the way of exceptions its get method can raise. The sole usage example in the documentation:
# Asynchronous by default
future = producer.send('my-topic', b'raw_bytes')
# Block for 'synchronous' sends
try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    # Decide what to do if produce request failed...
    log.exception()
    pass
suggests that the only kind of exception it will raise is KafkaError, which is not true. In fact, get can and will (re-)raise any exception that the asynchronous publishing mechanism encountered in trying to get the message out the door.
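So in practice a broader catch around the earlier res.get call is safer (a sketch; the logger name is arbitrary):
import logging

log = logging.getLogger(__name__)

# get() re-raises whatever the background send actually hit,
# which is not necessarily a KafkaError.
try:
    record_metadata = res.get(timeout=10)
except Exception:
    log.exception("produce request failed")
    raise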
I also faced the same error. Once I added json.dumps while sending the key, it worked.
producer.send(topic="first_topic",
              key=json.dumps(key).encode('utf-8'),
              value=json.dumps(msg).encode('utf-8')
              ).add_callback(on_send_success).add_errback(on_send_error)

Reading RawRequest JSON parameter value in SoapUI changes its value

I'm trying to transfer a parameter from the RawRequest using SoapUI, but when reading it, the value changes.
The parameter is a request ID (which is unique for every test); every test case requests it from Custom Properties, where it is stored as follows:
${=((System.currentTimeMillis().toString()).subSequence(4, (System.currentTimeMillis().toString()).length())).replaceFirst("0", "")}
The above generates a number like this, for example: 17879164.
The problem starts when I try to transfer it using either the built-in feature or a Groovy script. Both read the parameter incorrectly:
(Screenshots: the parameter as shown in the RawRequest window, as read in the Transfer window, and as read by the Groovy script.)
Can anyone explain why this value, despite being shown in the SoapUI RawRequest window as 17879164, is then read as 17879178 by two different methods?
I think the clue might be that when I use a "flat number" as reqId, rather than the generated one, both methods work fine and return the correct number. But in this case it is the RawRequest, which I understand is set once and for all, so what is shown in the window and what is read should be the same.
What you are seeing is a "feature" in SoapUI. Your transfer step will transfer the code, which will then get evaluated again, resulting in a different value.
What you need to do is:
Create a test case property.
Set the property to a value from the test case setup script. So in your case, something like testCase.setPropertyValue("your_property", ((System.currentTimeMillis().toString()).subSequence(4, (System.currentTimeMillis().toString()).length())).replaceFirst("0", "")) (see the sketch after this list).
Anywhere in your test, refer to the test case property ${#TestCase#your_property}... which is a fixed value at this point, so it will always be the same.
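A minimal sketch of such a setup script, keeping the property name your_property from above:
// TestCase Setup Script: evaluate the expression once and store the result as a plain value
def now = System.currentTimeMillis().toString()
def reqId = now.subSequence(4, now.length()).toString().replaceFirst("0", "")
testCase.setPropertyValue("your_property", reqId)
From then on, ${#TestCase#your_property} expands to that stored string instead of re-evaluating the expression.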

FBLOG_TRACE() No logging to Logfile -- FBLOG_INFO() logging OK -- What is the DIFFERENCE

FIREBREATH 1.6 -- VC2010 --
No logging with FBLOG_TRACE("StaticInitialize()", "INIT-trace");
Settings:
outMethods.push_back(std::make_pair(FB::Log::LogMethod_File, "U:/logs/PT.log"));
...
FB::Log::LogLevel getLogLevel() {
    return FB::Log::LogLevel_Trace;
...
Changing "FBLOG_TRACE" to "FBLOG_INFO" makes logging to the logfile work. I don't understand the reason.
The function was not inserted in its proper place.
FB::Log::LogLevel getLogLevel() {
    return FB::Log::LogLevel_Trace; // Now Trace and above is logged.
}
The description of logging in the FireBreath documentation says:
Enabling logging
...
regenerate your project using the prep* scripts
open up Factory.cpp in your project. You need to define the following function inside the class definition for PluginFactory:
...
About log levels
...
If you want to change the log level, you need to define the following in your Factory.cpp:
Referring to the above, that reads as "somewhere in Factory.cpp", which is incorrect. The description should say:
If you want to change the log level, you need to define the following function inside the class definition for PluginFactory:
I moved it from the bottom of Factory.cpp to inside the PluginFactory class. Now it works as expected!
The entire purpose of having different log levels (FBLOG_FATAL, FBLOG_ERROR, FBLOG_WARN, FBLOG_INFO, FBLOG_DEBUG, FBLOG_TRACE) is so that you can configure which level to use and anything below that level is hidden. The default log level in FireBreath is FB::Log::LogLevel_Info, which means that nothing below INFO (such as DEBUG or TRACE) will be visible.
You can change this by overriding FB::FactoryBase::getLogLevel() in your Factory class to return FB::Log::LogLevel_Trace.
The method you'd be overriding is: https://github.com/firebreath/FireBreath/blob/master/src/PluginCore/FactoryBase.cpp#L78
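In Factory.cpp that looks roughly like this (a sketch; the factory class name and surrounding code come from your generated project):
class PluginFactory : public FB::FactoryBase
{
public:
    // ... other factory methods generated by fbgen ...

    FB::Log::LogLevel getLogLevel()
    {
        return FB::Log::LogLevel_Trace; // TRACE and above will be logged
    }
};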
The definition of the LogLevel enum:
https://github.com/firebreath/FireBreath/blob/master/src/ScriptingCore/logging.h#L69
There was a version of FireBreath in which this didn't work; I think it was fixed by 1.6.0, but I don't remember for certain. If that doesn't work, try updating to the latest on the 1.6 branch (which is currently 1.6.1 as of this writing, but I haven't found time to release it yet).

EntityFramework 4.1 Code First incorrectly names complex type column names

Say I have a table called Users, which contains your typical information: Id, Name, Street, City--much like in the example here:
http://weblogs.asp.net/manavi/archive/2010/12/11/entity-association-mapping-with-code-first-part-1-one-to-one-associations.aspx.
Among other things, this article states:
"Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type." My Address class meets these criteria: it's made up of strings and isn't used anywhere else.
In the app, though (I'm not sure if this makes any difference), we're calling the Users something else--say, Techs. I want to break out User's address columns into an Address so each Tech can have its own Address. According to the article above, EF should infer this and take care of the complex type automatically. What I'm getting, though, when the context attempts to give me a Tech, is the following exception:
System.Data.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details. ---> System.Data.SqlClient.SqlException: Invalid column name 'Address_Street'.
Invalid column name 'Address_City'.
Invalid column name 'Address_State'.
Invalid column name 'Address_Zip'.
It looks like it's trying to make sense of the Tech.Address property, but is giving each of its sub-properties the wrong name (e.g., "Address_City" instead of "City").
Any ideas on how I can rectify this?
That is correct behavior. The default convention always prefixes columns mapped from a complex type with the type name. If you want to use different column names, you must map them either through data annotations:
public class Address
{
    [Column("City")]
    public string City { get; set; }
    ...
}
or through fluent API:
modelBuilder.ComplexType<Address>().Property(a => a.City).HasColumnName("City");
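For example, to keep all four of the original (unprefixed) column names, something along these lines in your context's OnModelCreating (a sketch; the property names Street/City/State/Zip are assumed from the error message):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Map each Address property back to its original, unprefixed column name
    modelBuilder.ComplexType<Address>().Property(a => a.Street).HasColumnName("Street");
    modelBuilder.ComplexType<Address>().Property(a => a.City).HasColumnName("City");
    modelBuilder.ComplexType<Address>().Property(a => a.State).HasColumnName("State");
    modelBuilder.ComplexType<Address>().Property(a => a.Zip).HasColumnName("Zip");
}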