How to measure the number of asserts per line of code in SonarQube - JUnit

We are attempting to get another view of our code coverage beyond the standard line and branch coverage. We would like to get the number of asserts per line/method/class in order to see whether we are just running through the code or actually checking for expected results.
So, how can we measure the number of asserts in a codebase in SonarQube?

There is a product called Pitest (PIT) that addresses the goal of my question, and there is a SonarQube plugin for Pitest. So the answer for verifying whether the tests actually check anything is: Pitest (mutation testing).
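As a rough illustration (the Maven coordinates and goal name are from memory, so treat them as an assumption), PIT can typically be run against a Maven project with:
mvn org.pitest:pitest-maven:mutationCoverage
The resulting mutation-coverage report shows which mutants survived, i.e. which code is executed by tests without any assert actually failing when the behaviour changes.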


Zabbix Agent 3.4.9 Active Monitoring Log file, Not supported: too many parameters

I'm trying to monitor the log file: /var/log/neo4j/debug.log
I'm looking for the text: Application threads blocked for ######ms
I have devised a regular expression for this as: Application threads blocked for (\d+)ms
We want to skip old info, so skip is added as the mode.
I want to pull out the number of ms so that the trigger can alert on blockages > 150 ms, so \1 must be set as the output parameter.
I constructed the key as:
log[/var/log/neo4j/debug.log,Application threads blocked for (\d+)ms,,,skip,\1]
in accordance with
log[/path/to/file/file_name,< regexp >,< encoding >,< maxlines >,< mode >,< output >,< maxdelay >]
Type of Information is: Log
Update interval: 30s
History storage period: 90d
Timestamps appear in the log file as: 2018-10-03 13:29:20.460+0000
My timestamp appears as: yyyypMMpddphhpmmpss
I have tried a bunch of different things over the past week trying to get it to stop showing a "Too Many Parameters" error in the GUI without success. I'm completely lost at this point. We have 49 other items working correctly (all others are passive). Active checks are enabled in zabbix_agentd.conf.
I know this is an old thread but it took me a while to solve this problem, so I'd like to share and hope it helps...
According to the official Zabbix documentation, the parameter usage for the log (and logrt) keys should be:
logrt[file_regexp,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>]
So, if we wanted to use only the "skip" parameter, the item key should look like:
logrt[MyLogFile.log,,,,skip,,]
Nevertheless, it triggers the error "too many parameters".
In fact, to solve this issue I configured this key in my environment with only one comma after the parameter, like this:
logrt["MyLogFile.log","MyFilter",,,skip,]
That's it... hope it helps someone else.

MediaWiki not returning continue parameter

I'm using this API request:
https://en.wikipedia.org/w/api.php?action=query&list=geosearch&gsradius=10000&gscoord=51.540951897949|-0.051086739997922&format=json&gslimit=50&continue=
which delivers 50 results. I want to use the 'continue' parameter to get the next page of results. According to the documentation I should get a continue field back in the results, but I don't get any such field, so I can't get the next page.
Does anyone have any suggestions?
Dave, as @svick says, it seems list=geosearch (which is part of Extension:GeoData) does not support continuation; indeed, it actually returns a "batchcomplete" element to indicate that there are no more results.
I think you should either just get the maximum number of results (500 for users, 5000 for bots on Wikipedia), or, if that's not satisfactory for your use case (what is it?), pipe in at task T78703.
(Or, if you believe it to be a separate issue, report a new bug.)
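For example, simply raising the limit on the original request (an illustration only, reusing the query from the question) would look like:
https://en.wikipedia.org/w/api.php?action=query&list=geosearch&gsradius=10000&gscoord=51.540951897949|-0.051086739997922&format=json&gslimit=500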

How to test whether log compaction is working or not in Kafka?

I have made changes in the server.properties file in Kafka 0.8.1.1, i.e. added log.cleaner.enable=true, and also set cleanup.policy=compact while creating the topic.
Now, while testing it, I pushed the following messages to the topic as (key, message) pairs:
Offset: 1 - (123, abc);
Offset: 2 - (234, def);
Offset: 3 - (345, ghi);
Offset: 4 - (123, changed)
The 4th message was pushed with the same key as an earlier input, but with a changed message, so log compaction should come into the picture here. Using the Kafka tool, I can still see all 4 offsets in the topic. How can I know whether log compaction is working or not? Should the earlier message be deleted, or is log compaction working fine since the new message has been pushed?
Does this have anything to do with the log.retention.hours, topic.log.retention.hours, or log.retention.size configurations? What is the role of these configs in log compaction?
P.S. - I have thoroughly gone through the Apache Documentation, but still it is not clear.
Even though this question is a few months old, I just came across it while doing research for my own question. I had created a minimal example of how compaction works with Java; maybe it is helpful for you too:
https://gist.github.com/anonymous/f78184eaeec3ee82b15182aec24a432a
Furthermore, consulting the documentation, I used the following configuration on a topic level for compaction to kick in as quickly as possible:
min.cleanable.dirty.ratio=0.01
cleanup.policy=compact
segment.ms=100
delete.retention.ms=100
When run, this class shows that compaction works - there is only ever one message with the same key on the topic.
With the appropriate settings, this would be reproducible on the command line.
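For instance (the topic name and ZooKeeper address below are placeholders, so adjust them to your environment), the topic could be created with those settings roughly like this:
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic compaction-test --config cleanup.policy=compact --config min.cleanable.dirty.ratio=0.01 --config segment.ms=100 --config delete.retention.ms=100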
Actually, log compaction is visible only when the number of logs reaches a very high count, e.g. 1 million. So, if you have that much data, it's good. Otherwise, using configuration changes, you can reduce this limit to, say, 100 messages, and then you can see that out of the messages with the same key, only the latest message remains; the previous one is deleted. It is better to use log compaction if you have a full snapshot of your data every time; otherwise you may lose the previous logs with the same associated key, which might be useful.
In order to check a topic's properties from the CLI, you can do it using the kafka-topics command:
https://grokbase.com/t/kafka/users/14aev0snbd/command-line-tool-for-topic-metadata
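For example (the ZooKeeper address and topic name are placeholders), a describe call along these lines shows the topic-level overrides:
kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-compacted-topic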
It is also a good point to take a look at log.roll.hours, which by default is 168 hours. In simple words: even if you have a not-so-active topic and you are not able to fill the maximum segment size (by default 1 GB for normal topics and 100 MB for the offsets topic) in a week, you will still have a closed segment with a size below log.segment.bytes. This segment can be compacted on the next turn.
You can do it with the kafka-topics CLI.
I'm running it from Docker (confluentinc/cp-enterprise-kafka:6.0.0).
$ docker-compose exec kafka kafka-topics --zookeeper zookeeper:32181 --describe --topic count-colors-output
Topic: count-colors-output PartitionCount: 1 ReplicationFactor: 1 Configs: cleanup.policy=compact,segment.ms=100,min.cleanable.dirty.ratio=0.01,delete.retention.ms=100
Topic: count-colors-output Partition: 0 Leader: 1 Replicas: 1 Isr: 1
But don't get confused if you don't see anything in the Configs field: that happens when default values were used. So, unless you see cleanup.policy=compact in the output, the topic is not compacted.
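If it is not, the policy can also be set on an existing topic. As an illustration (again, the ZooKeeper address and topic name are placeholders), something like:
kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name my-topic --add-config cleanup.policy=compact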

Amending a condition as the stack unwinds in Common Lisp

I'm working on some lisp code that munges a CVS repository in order to get it into shape for a git conversion. As such, when something goes wrong it might not be a bug in my code but might instead be an inaccuracy in the input data which needs fixing.
Deep down in the bowels of my code, I'll have (say) some function that merges two date ranges by taking an intersection. If that intersection turns out to be empty, I'll raise an error. This is all good, but my "merge dates" function is missing lots of the information that the user (me!) needs in order to figure out what went wrong. For example, which CVS master file ("foo,v") was I working on? What branch was I thinking about? Etc. etc.
My partial solution to this problem is to handle and re-raise the error. For example, this sort of code:
(handler-case
    (do-something)
  (unclear-graft-point (c)
    (setf (slot-value c 'master) master)
    (setf (slot-value c 'branch) branch)
    (error c)))
which sets some extra slots with useful info before re-raising the error.
The report function for the condition then checks whether those slots have been set and, if so, will use them to give a more helpful error message.
Unfortunately, the backtrace that I get stops at the top-level re-raise. That makes sense: it's the code I wrote. But it's not really what I want...
Is there a way in Common Lisp to "annotate" a condition as it hurtles past, without re-raising it and, hence, partially unwinding the stack without reporting that in the back trace?
I gave a reasonable amount of background info about what I'm doing in the hope that, if this isn't possible, someone will say "Ah, what you should do in this situation is...": am I just going about this the wrong way?
Yes, there is a piece of Common Lisp technology applicable to your case: it is called HANDLER-BIND. You can find a detailed explanation and use cases for it, alongside the rest of the condition-handling machinery, in Peter Seibel's Practical Common Lisp, Chapter 19, Conditions and Restarts.
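A minimal sketch of the idea, reusing the condition and slot names from the question (the surrounding bindings for master and branch are assumed): a HANDLER-BIND handler runs before the stack unwinds, so it can annotate the condition and then simply decline to handle it by returning normally, which lets the error keep propagating with its original backtrace:
(handler-bind
    ((unclear-graft-point
       (lambda (c)
         ;; Runs at the point of the signal, before any unwinding:
         ;; attach the contextual information to the condition.
         (setf (slot-value c 'master) master)
         (setf (slot-value c 'branch) branch)
         ;; Returning normally declines to handle the condition,
         ;; so it continues to propagate with the full backtrace.
         )))
  (do-something))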

Catch Mathematica warnings/errors without displaying them

I have a problem involving NDSolve in Mathematica, which I run multiple times with different values of the parameters. For some of these values, the solution results in singularities and NDSolve warns with NDSolve::ndsz or other related warnings.
I would simply like to catch these warnings, suppress their display, and just keep track of the fact that a problem occurred for these particular values of the parameters. I thought of the following options (neither of which really do the trick):
I know I can determine whether a command has resulted in a warning or error by using Check. However, that will still display the warning. If I turn the warning off with Off, then Check fails to report it too.
It is possible to stop NDSolve using the EventLocator method, so I could check for very large values of the function or its derivatives and stop evaluation in that case. However, in practice, this still produces warnings from time to time, presumably because the step size can sometimes be so large that the NDSolve warning triggers before my Event has taken place.
Any other suggestions?
If you wrap the Check with Quiet, then I believe everything should work as you want. For example, you can suppress the specific message Power::indet:
In[1]:= Quiet[Check[0^0,err,Power::indet],Power::indet]
Out[1]= err
but other messages are still displayed
In[2]:= Quiet[Check[Sin[x,y],err,Power::indet],Power::indet]
During evaluation of In[2]:= Sin::argx: Sin called with 2 arguments; 1 argument is expected. >>
Out[2]= Sin[x,y]
Using Quiet and Check together works:
Quiet[Check[Table[1/Sin[x], {x, 0, \[Pi], \[Pi]}], $Failed]]
Perhaps you wish to redirect messages? The following is copied almost verbatim from the relevant documentation page.
stream = OpenWrite["msgtemp.txt"];
$Messages = {stream};
1/0
FilePrint["msgtemp.txt"]