FBLOG_TRACE() not logging to logfile -- FBLOG_INFO() logging OK -- what is the difference? - firebreath

FireBreath 1.6 -- VC2010
No logging happens with FBLOG_TRACE("StaticInitialize()", "INIT-trace");
Settings:
outMethods.push_back(std::make_pair(FB::Log::LogMethod_File, "U:/logs/PT.log"));
...
FB::Log::LogLevel getLogLevel() {
    return FB::Log::LogLevel_Trace;
...
Changing "FBLOG_TRACE" to "FBLOG_INFO" makes logging to the logfile work. I don't understand the reason.

The function was not inserted in its proper place.
FB::Log::LogLevel getLogLevel() {
    return FB::Log::LogLevel_Trace; // Now Trace and above is logged.
}
The description in the Logging documentation says:
Enabling logging
...
regenerate your project using the prep* scripts
open up Factory.cpp in your project. You need to define the following function inside the class definition for PluginFactory:
...
About log levels
...
If you want to change the log level, you need to define the following in your Factory.cpp:
Read literally, that means anywhere in "Factory.cpp"; that is incorrect. The description should say:
If you want to change the log level, you need to define the following function inside the class definition for PluginFactory:
I dragged it from the bottom of "Factory.cpp" to inside the class PluginFactory.
Now it works as expected!
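For illustration, here is a minimal sketch of how the relevant part of Factory.cpp ends up looking once the overrides sit inside the class definition. It assumes the stock PluginFactory skeleton generated by fbgen, reuses the getLoggingMethods body from the settings shown in the question, omits the other members, and the exact signatures may differ slightly between FireBreath versions.

class PluginFactory : public FB::FactoryBase
{
public:
    ... // createPlugin(), globalPluginInitialize(), etc. omitted

    // Both logging overrides must be member functions of PluginFactory:
    void getLoggingMethods(FB::Log::LogMethodList& outMethods)
    {
        outMethods.push_back(std::make_pair(FB::Log::LogMethod_File, "U:/logs/PT.log"));
    }

    FB::Log::LogLevel getLogLevel()
    {
        return FB::Log::LogLevel_Trace; // Trace and above is logged
    }
};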

The entire purpose of having different log levels (FBLOG_FATAL, FBLOG_ERROR, FBLOG_WARN, FBLOG_INFO, FBLOG_DEBUG, FBLOG_TRACE) is so that you can configure which level to use and anything below that level is hidden. The default log level in FireBreath is FB::Log::LogLevel_Info, which means that nothing below INFO (such as DEBUG or TRACE) will be visible.
You can change this by overriding FB::FactoryBase::getLogLevel() in your Factory class to return FB::Log::LogLevel_Trace.
The method you'd be overriding is: https://github.com/firebreath/FireBreath/blob/master/src/PluginCore/FactoryBase.cpp#L78
The definition of the LogLevel enum:
https://github.com/firebreath/FireBreath/blob/master/src/ScriptingCore/logging.h#L69
There was a version of FireBreath in which this didn't work; I think it was fixed by 1.6.0, but I don't remember for certain. If that doesn't work, try updating to the latest on the 1.6 branch (currently 1.6.1 as of this writing, though I haven't found time to release it yet).

Related

Chisel: randomly initialize register values when simulating with Verilator

I'm using Chisel and a BlackBox to run my Chisel logic against a Verilog register file.
The register file does not have a reset signal, so I expect the registers to be randomly initialized.
I passed --x-initial unique to Verilator.
Basically this is how I launch the test:
private val backendName = "verilator"
"NOCDMA" should s" do blkwrite and blkread correctly (with $backendName)" in {
  Driver.execute(Array("--fint-write-vcd", "--backend-name", s"$backendName",
      "--more-vcs-flags", "--trace-depth 1 --x-initial unique"),
    () => new DMANetworkWithMem(memAddrWidth, memDataWidth)(nocDataWidth)(nNodesX, nNodesY)) {
    c => new DMANetworkRWTest(c)
  }
}
But the data I read from the register file is all zeros before I write anything to it.
The read data is correct after I write to it.
So, is there anything inside Chisel that I need to tune, or did I not do something properly?
Any suggestions?
I'm not certain, but I found the following Verilator issue describing a similar problem: https://github.com/verilator/verilator/issues/1399.
From skimming the above issue, I think you also need to pass +verilator+seed+<value> and +verilator+rand+reset+<value> at runtime. I am not an expert in the iotesters, but I believe you can add these runtime values through the iotesters argument: --more-vcs-c-flags.
Side note, I would also set --x-assign unique in Verilator if there are cases in the Verilog where the runtime would otherwise inject an X (e.g. an out-of-bounds index).
I hope this helps!
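To make that concrete, the launcher from the question might look something like the sketch below. The +verilator+seed+<value> and +verilator+rand+reset+<value> plusargs are standard Verilator runtime options; routing them through --more-vcs-c-flags is only my assumption from the paragraph above and may need adjusting for your iotesters version, and 12345 is an arbitrary seed.

private val backendName = "verilator"
"NOCDMA" should s" do blkwrite and blkread correctly (with $backendName)" in {
  Driver.execute(Array(
      "--fint-write-vcd",
      "--backend-name", s"$backendName",
      // Compile-time Verilator flags (as in the question, plus --x-assign unique):
      "--more-vcs-flags", "--trace-depth 1 --x-initial unique --x-assign unique",
      // Runtime plusargs; rand+reset+2 asks Verilator to randomize reset values:
      "--more-vcs-c-flags", "+verilator+seed+12345 +verilator+rand+reset+2"),
    () => new DMANetworkWithMem(memAddrWidth, memDataWidth)(nocDataWidth)(nNodesX, nNodesY)) {
    c => new DMANetworkRWTest(c)
  }
}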

ConsoleLauncher returns 0 although class-under-test could not be loaded

We run a set of tests in a CI pipeline and call our test classes like this:
java -classpath junit-jupiter-api-5.0.1.jar:junit-platform-console-standalone-1.0.1.jar org.junit.platform.console.ConsoleLauncher --select-class xy.Test
If class xy.Test cannot be found on the classpath, an error message appears, but ConsoleLauncher's return value is 0! Since our CI system runs unattended, the return value is the only thing we can evaluate!
As far as I have seen, this behaviour was changed in JUnit 5.0.0 M2, but I regard it as a mistake: if I select a class with --select-class and the class cannot be found, then something has gone wrong!
As a countermeasure I hacked (by means of introspection) org.junit.platform.commons.util.BlacklistedExceptions, overwriting its blacklist field so that it contains OutOfMemoryError (the default) and PreconditionViolationException (the case where the class could not be found).
(If the standard behaviour is not going to be changed...) I think there should be a better way to get this behaviour!

Use logger name if MDC key missing

I am using logback with a third party package which sets an identifier in the MDC when its code is running. The rest of the time, this identifier is not set. So if I use a PatternLayout of [%X{id}] %m%n, then I see messages like
[Foo] Foo running
[Bar] Bar running
for messages related to the package. However, the rest of my log statements look like
[] Thing happened
The %X{id} is useful information when it exists, but I would like the logger name to be used when it is not. I tried
[%X{id:-%logger{20}}]
and
[%X{id:-logger{20}}]
but neither used the logger name as a default value.
I could write a custom layout that sets id to the logger name if it is not set, forwards to the wrapped layout, and then clears the field. Is there a simpler way to do this?
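One way to sketch the fallback described above, without a full custom layout, is a custom conversion word; the class name and conversion word here are made up:

import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

// Falls back to the logger name when the MDC "id" entry is missing.
public class IdOrLoggerConverter extends ClassicConverter {
    @Override
    public String convert(ILoggingEvent event) {
        String id = event.getMDCPropertyMap().get("id");
        return (id == null || id.isEmpty()) ? event.getLoggerName() : id;
    }
}

It would be registered in logback.xml with something like
<conversionRule conversionWord="idOrLogger" converterClass="com.example.IdOrLoggerConverter" />
and then used in the pattern as [%idOrLogger] %m%n.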

Using attributes in Chef

I just got started with Chef recently. I gather that attributes are stored in one large monolithic hash named node that's available for use in your recipes and templates.
There seem to be multiple ways of defining attributes:
Directly in the recipe itself
Under an attributes file - e.g. attributes/default.rb
In a JSON object that's passed to the chef-solo call. e.g. chef-solo -j web.json
Given the above three, I'm curious:
Are those all the ways attributes can be defined?
What's the order of precedence here? I'm assuming one of these methods supersedes the others.
Is #3 (the JSON method) only valid for chef-solo ?
I see both node and default hashes defined. What's the difference? My best guess is that the default hash defined in attributes/default.rb gets merged into the node hash?
Thanks!
Your last question is probably the easiest to answer. In an attributes file you don't have to type 'node', so this in attributes/default.rb:
default['foo']['bar']['baz'] = 'qux'
Is exactly the same as this in recipes/whatever.rb:
node.default['foo']['bar']['baz'] = 'qux'
In retrospect having different syntaxes for recipes and attributes is confusing, but this design choice dates back to extremely old versions of Chef.
The -j option is available to both chef-client and chef-solo, and in both cases it sets attributes. Note that these will be 'normal' attributes, which are persisted in the node object and are generally not recommended. However, the 'run_list', 'chef_environment' and 'tags' on servers are implemented this way. It is generally recommended to avoid other 'normal' attributes, i.e. to avoid node.normal['foo'] = 'bar' or node.set['foo'] = 'bar' in recipe (or attribute) files. The difference is that if you delete the node.normal line from the recipe, the old setting on the node will persist, whereas if you delete a node.default setting from a recipe, that setting is removed from the node the next time chef-client runs.
What happens in a chef-client run to make this work is that at the start of the run the client issues a GET to fetch its old node document from the server. It then wipes the default, override and automatic (Ohai) attributes while keeping the 'normal' attributes. The behavior of the default, override and automatic attributes makes the most sense: you start over at the beginning of the run and then construct all the state; if it's not in the recipe, you don't see a value there. However, the run_list is normally set on the node, and nodes do not (often) manage their own run_list. In order to make the run_list persist, it is a normal attribute.
The choice of the word 'normal' is unfortunate, as is the fact that 'node.set' sets 'normal' attributes. While those look like the obvious choices for setting attributes, users should avoid them. Again, the problem is that they came first and are still necessary and required for the run_list. Generally, stick with default and override attributes only. Typically you can get most of your work done with default attributes; those should be preferred.
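To make the persistence difference concrete, here is a small sketch; the cookbook and attribute names are made up:

# recipes/default.rb
# default: rebuilt on every chef-client run; delete this line and the
# attribute disappears from the node after the next run.
node.default['myapp']['port'] = 8080

# normal (which node.set also writes): saved back into the node object;
# delete this line and the previously saved value still persists on the node.
node.normal['myapp']['listen_address'] = '0.0.0.0'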
There's a big precedence level picture here:
https://docs.chef.io/attributes.html#attribute-precedence
That's the ultimate source of truth for attribute precedence.
That graph describes all the different ways that attributes can be defined.
The problem with Chef attributes is that they've grown organically and sprouted many options to try to help out users who painted themselves into a corner. In general you should never need to touch the automatic, normal, force_default or force_override levels of attributes. You should also avoid setting attributes in recipe code; move attribute assignments from recipes into attribute files. That leaves these places to set attributes:
in the initial -j argument (sets normal attributes; you should limit this to setting the run_state, and overusing it is generally a smell)
in the role file as default or override precedence levels (careful with this one though because roles are not versioned and if you touch these attributes a lot you will cause production issues)
in the cookbook attributes file as default or override precedence levels (this is where you should set most of your attributes)
in environment files as default or override precedence levels (can be useful for settings like DNS servers in a datacenter, although you can use roles and/or cookbooks for this as well)
You can also set attributes in recipes, but when you do that you invariably wind up getting your next lesson in the two-phase compile-converge model that Chef recipes run through. If you have recipes that need to communicate with each other, it's better to use node.run_state, which is just a hash that doesn't get written as node attributes. You can drop node.run_state[:foo] = 'bar' in one recipe and read it in another, as sketched below. You probably will see recipes that set attributes, though, so you should be aware of that.
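For example, the run_state handoff might look like this; the recipe names are made up:

# recipes/first.rb -- stash a value; run_state is an in-memory hash
# and is never saved to the node as an attribute.
node.run_state[:foo] = 'bar'

# recipes/second.rb -- read it back later in the same chef-client run.
log "run_state foo is #{node.run_state[:foo]}"

Both recipes just need to be on the same run_list, with first.rb converging before second.rb.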
Hope That Helps.
When writing a cookbook, I visualize three levels of attributes:
Default values to converge successfully -- attributes/default.rb
Local testing override values -- JSON or .kitchen.yml (have you tried chef_zero using ChefDK and Kitchen?)
Environment/role override values -- link listed in lamont's answer: https://docs.chef.io/attributes.html#attribute-precedence

How can I avoid duplicated symbol errors when I use static libraries with Cocoapods?

I've got an executable target called Foobar, a static library holding some common code called FoobarCommon, and a test target specifically for the common code called FoobarCommonSpecs.
Unsurprisingly, I have made both Foobar and FoobarCommonSpecs depend on the FoobarCommon library.
The Podfile looks something like this:
target 'FoobarCommon' do
  pod 'ReactiveCocoa'
  ...
end

target 'Foobar' do # links against FoobarCommon in Xcode
  ...
end

target 'FoobarCommonSpecs' do # links against FoobarCommon in Xcode
  pod 'LLReactiveMatchers', :git => 'https://github.com/lawrencelomax/LLReactiveMatchers.git'
end
LLReactiveMatchers is a Pod that depends on ReactiveCocoa.
Note that in this situation, ReactiveCocoa is present in both FoobarCommon and FoobarCommonSpecs.
The Problem
Whenever I run FoobarCommonSpecs, I get many duplicate symbol errors for ReactiveCocoa.
I want to tell Cocoapods to just IGNORE LLReactiveMatchers' dependency on ReactiveCocoa, let Xcode do its job, and link against the copy of ReactiveCocoa found in FoobarCommon. How do I do that?
Does the link_with directive have anything to do with this?