I noticed very strange behavior in how random() works on any collection after I added the dependency implementation "androidx.lifecycle:lifecycle-viewmodel-compose:2.5.1" to my project.
After adding the dependency, every call to random() on any collection gives me the same sequence of results.
For example, the following code always gives me the same numbers. I start the app, tap the text a few times, and see some sequence of numbers. I close the app, clear it from memory, start it again, and see the same sequence. Even after reinstalling the app I see the same numbers.
var numbers by remember {
    mutableStateOf("numbersFromSet")
}
Column(horizontalAlignment = Alignment.CenterHorizontally, verticalArrangement = Arrangement.Center) {
    val setOfNumbers = setOf(1,2,3,4,5,6,7,8,9)
    Text(text = numbers, modifier = Modifier.clickable {
        numbers = setOfNumbers.random().toString()
    })
}
It doesn't matter which collection I use or where it's stored. It looks like, after adding the dependency, the output of random() becomes predefined. I see this behavior on both physical and virtual devices. After removing the dependency from Gradle, random() starts working as expected and I always see random results.
I am using Android Studio Chipmunk 2021.2.1 Patch 1, if that matters.
I will be very grateful for any answers.
For your problem, you can use the solution below to get different numbers every time.
It simply changes the seed for the Random on every click.
numbers = setOfNumbers.random(Random(System.currentTimeMillis())).toString()
The reason Kotlin's Random gives the same numbers is that it is being used with the same seed.
I am not sure why this happens only with that specific dependency, though. However, the above solution will work when using that dependency too, since the seed is unique on every click.
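To illustrate the seed behaviour, here is a minimal Java sketch (the same principle applies to kotlin.random.Random): a generator constructed with a fixed seed produces the same sequence on every run, while a time-based seed almost always differs between runs.

import java.util.List;
import java.util.Random;

public class SeedDemo {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9);

        // Fixed seed: this prints the same three values on every run.
        Random fixedSeed = new Random(42);
        for (int i = 0; i < 3; i++) {
            System.out.print(numbers.get(fixedSeed.nextInt(numbers.size())) + " ");
        }
        System.out.println();

        // Time-based seed: these values will (almost always) differ between runs.
        Random timeSeed = new Random(System.currentTimeMillis());
        for (int i = 0; i < 3; i++) {
            System.out.print(numbers.get(timeSeed.nextInt(numbers.size())) + " ");
        }
        System.out.println();
    }
}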
I'm using Chrome's performance tab to study the performance of a page, and I occasionally get a warning like:
DevTools: CPU profile parser is fixing 4 missing samples.
Does anyone know what this means? Googling for this warning has returned no results so far...
Having come across this situation myself, here are some possibly helpful things to consider.
When Chrome 58 was released in 2017, some changes were made around performance analysis. For example:
The Timeline panel was renamed to the Performance panel.
The Profiles panel was renamed to the Memory panel.
The Record JavaScript CPU Profile menu item was moved to DevTools → three dots at right → More tools → JavaScript Profiler (the old version of the JavaScript Profiler).
In addition to these, the warning message in question (DevTools: CPU profile parser is fixing N missing samples.) is written to the console when there is a single (program) sample between two call stacks sharing the same bottom node. Also, according to the source code, the sample count must be greater than or equal to 3.
The comment above the _fixMissingSamples method in the CPUProfileDataModel.js file explains this situation as follows:
// Sometimes sampler is not able to parse the JS stack and returns
// a (program) sample instead. The issue leads to call frames belong
// to the same function invocation being split apart.
// Here's a workaround for that. When there's a single (program) sample
// between two call stacks sharing the same bottom node, it is replaced
// with the preceeding sample.
In light of this information, we can trace the code and examine the behavior.
CPUProfileDataModel.js
let prevNodeId = samples[0];
let nodeId = samples[1];
let count = 0;
for (let sampleIndex = 1; sampleIndex < samplesCount - 1; sampleIndex++) {
    const nextNodeId = samples[sampleIndex + 1];
    if (nodeId === programNodeId && !isSystemNode(prevNodeId) && !isSystemNode(nextNodeId) &&
        bottomNode(idToNode.get(prevNodeId)) === bottomNode(idToNode.get(nextNodeId))) {
        ++count;
        samples[sampleIndex] = prevNodeId;
    }
    prevNodeId = nodeId;
    nodeId = nextNodeId;
}
if (count) {
    Common.console.warn(ls`DevTools: CPU profile parser is fixing ${count} missing samples.`);
}
It seems that it simply checks whether the nodes before and after the current node share the same bottom node (it is actually comparing parent nodes). In addition, the previous and next nodes must not be system nodes (program/GC/idle functions), and the current node must be the (program) node. If that is the case, the current entry in the samples array is replaced with the previous node.
idle: waiting to do work
program: native code execution
garbage collector: accounts for garbage collection
Also, disabling JavaScript samples under Performance → Capture Settings results in fewer details and call stacks, because all of the JS call stacks are omitted. The warning message should not appear in this case.
But since this warning only means that the sampler could not parse the JS stack and some call frames were split apart, it does not seem like a very important thing to worry about.
Resources:
https://github.com/ChromeDevTools/devtools-frontend/tree/master/front_end/sdk
https://github.com/ChromeDevTools/devtools-frontend/blob/master/front_end/sdk/CPUProfileDataModel.js
https://chromium.googlesource.com/chromium/blink/+/master/Source/devtools/front_end/sdk/
https://developers.google.com/web/tools/chrome-devtools/evaluate-performance
https://developers.google.com/web/updates/2016/12/devtools-javascript-cpu-profile-migration
I am currently developing GeoTIFF reading and writing functions for Octave using .oct files. I went through the Octave documentation but could not find much on throwing exceptions. Does that mean I can throw an exception the way I do it in C++, by simply writing throw "error message"?
There are two ways. Admittedly they are documented in two utterly separate places, not cross-linked or cross-referenced, which makes no sense, and if you didn't already know the function/keyword you wouldn't find them:
error() raises an error, which stops the program. See 12.1 Raising Errors.
error("[%s] Here be wyrms", pkgname)
assert() both tests the condition and then raises the error() with a customizable message, so you don't have to write if (cond) ... error(...) ... endif yourself.
See B.1 Test Functions.
% 1. Produce an error if the specified condition is zero (not met).
assert (cond)
assert (cond, errmsg)
assert (cond, errmsg, …)
assert (cond, msg_id, errmsg, …)
% 2a. Produce an error if observed (expression) is not the same as expected (expression); Note that observed and expected can be scalars, vectors, matrices, strings, cell arrays, or structures.
assert (observed, expected)
% 2b. a version that includes a (typically floating-point) tolerance
assert (observed, expected, tol)
See also the command fail()
Yes, you could just use something like
error ("mynewlib: Hello %s world!", "foo");
to signal errors, which are then caught and displayed.
(Personally I think such questions should really go to the GNU Octave mailing list where you'll find the core developers and octave-forge package maintainers).
I guess you want to build a wrapper around libgeotiff? Have a look at the octave-image package! Where do you host your code?
./examples/code/unwinddemo.cc might also be interesting for you. It shows how to use unwind_protect and define user error handlers.
http://hg.savannah.gnu.org/hgweb/octave/file/3b0a9a832360/examples/code/unwinddemo.cc
Perhaps your function should then be merged into the octave-forge mapping package: "http://sourceforge.net/p/octave/mapping/ci/default/tree/"
I'm using IronPython 2.6.2 for .NET 4.0 as a scripting platform within a C#/WPF application. Scripts can include their own function definitions, class definitions, etc. I'm not restricting what can be written.
A memory leak appeared in the scripting piece recently after a script change. After commenting out more and more code, we determined that defining and calling a function with more than 13 parameters causes a memory leak. So if you call a function with 14 parameters IronPython will leak.
Here is some sample code on a timer running every 100ms:
_Timer.Enabled = false;
try
{
    var engine = Python.CreateEngine();
    engine.Execute("def SomeFunc(paramI, paramII, paramIII, paramIV, paramV, paramVI, paramVII, paramVIII, paramIX, paramX, paramXI, paramXII, paramXIII, paramXIV):\r\n\tpass\r\nSomeFunc(1,2,3,4,5,6,7,8,9,10,11,12,13,14)");
    //engine.Execute("def SomeFunc(paramI, paramII, paramIII, paramIV, paramV, paramVI, paramVII, paramVIII, paramIX, paramX, paramXI, paramXII, paramXIII):\r\n\tpass\r\nSomeFunc(1,2,3,4,5,6,7,8,9,10,11,12,13)");

    // With and without the following line makes no difference
    engine.Runtime.Shutdown();

    this.Dispatcher.Invoke((Action)delegate()
    {
        this.Title = DateTime.Now.ToString();
    });
}
catch (Exception)
{
}
_Timer.Enabled = true;
Note that I have a 14-parameter version of the script and below it is a commented-out 13-parameter version. The Python script is basically this:
def SomeFunc(paramI, paramII, paramIII, paramIV, paramV, paramVI, paramVII, paramVIII, paramIX, paramX, paramXI, paramXII, paramXIII, paramXIV):
    pass

SomeFunc(1,2,3,4,5,6,7,8,9,10,11,12,13,14)
I've tried with and without engine.Runtime.Shutdown() but it makes no difference. The 14-parameter version's memory will climb rapidly and the 13-parameter version's memory will climb slightly and then stabilize.
Any thoughts?
Thanks
- Shaun
There's a magic number of parameters in IronPython - less than that is a different (faster) code path than more. It sounds like there are still some bugs in the fallback code. Can you please open an issue with a self-contained test case?
Looking at the latest code I would think the boundary would be at 15. Can you try again on 2.7 Beta 2 and see if the results are the same?
After reading "What's your/a good limit for cyclomatic complexity?", I realized many of my colleagues were quite annoyed with this new QA policy on our project: no more than 10 cyclomatic complexity per function.
Meaning: no more than 10 'if', 'else', 'try', 'catch' and other code-workflow branching statements. Right. As I explained in 'Do you test private method?', such a policy has many good side-effects.
But: at the beginning of our (200-person, 7-year-long) project, we were happily logging (and no, we cannot easily delegate that to some kind of 'aspect-oriented programming' approach for logs).
myLogger.info("A String");
myLogger.fine("A more complicated String");
...
And when the first versions of our system went live, we experienced huge memory problems, not because of the logging (which was at one point turned off), but because of the log parameters (the strings), which were always calculated and then passed to the 'info()' or 'fine()' functions, only to discover that the level of logging was 'OFF' and that no logging was taking place!
So QA came back and urged our programmers to do conditional logging. Always.
if(myLogger.isLoggable(Level.INFO)) { myLogger.info("A String"); }
if(myLogger.isLoggable(Level.FINE)) { myLogger.fine("A more complicated String"); }
...
But now, with that 'cannot-be-moved' limit of 10 cyclomatic complexity per function, they argue that the various logs they put in their functions feel like a burden, because each "if(isLoggable())" is counted as +1 cyclomatic complexity!
So if a function has 8 'if', 'else' and so on in one tightly coupled, not easily shareable algorithm, plus 3 critical log actions... they breach the limit even though the conditional logs may not really be part of the actual complexity of that function...
How would you address this situation?
I have seen a couple of interesting coding evolutions (due to that 'conflict') in my project, but I just want to get your thoughts first.
Thank you for all the answers.
I must insist that the problem is not 'formatting' related, but 'argument evaluation' related (evaluation that can be very costly to do, just before calling a method which will do nothing).
So when I wrote "A String" above, I actually meant aFunction(), with aFunction() returning a String and being a call to a complicated method collecting and computing all kinds of log data to be displayed by the logger... or not (hence the issue, and the obligation to use conditional logging, hence the actual issue of an artificial increase in 'cyclomatic complexity'...).
I now get the 'variadic function' point advanced by some of you (thank you John).
Note: a quick test in Java 6 shows that my varargs function does evaluate its arguments before being called, so it cannot be applied to the function call itself, but rather to a 'log retriever object' (or 'function wrapper') on which toString() will only be called if needed. Got it.
I have now posted my experience on this topic.
I will leave it there until next Tuesday for voting, then I will select one of your answers.
Again, thank you for all the suggestions :)
With current logging frameworks, the question is moot
Current logging frameworks like slf4j or log4j 2 don't require guard statements in most cases. They use a parameterized log statement so that an event can be logged unconditionally, but message formatting only occurs if the event is enabled. Message construction is performed as needed by the logger, rather than pre-emptively by the application.
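For example, with slf4j the call site looks like this (a minimal sketch; the class and argument names are only illustrative). The {} placeholders are substituted, and the arguments' toString() methods invoked, only if DEBUG is actually enabled:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ParameterizedLoggingSketch {
    private static final Logger log = LoggerFactory.getLogger(ParameterizedLoggingSketch.class);

    void connect(Object widget, Object dongle) {
        // No guard statement needed: the message is only formatted,
        // and the arguments only stringified, when DEBUG is enabled.
        log.debug("Attempting connection of dongle {} to widget {}", dongle, widget);
    }
}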
If you have to use an antique logging library, you can read on to get more background and a way to retrofit the old library with parameterized messages.
Are guard statements really adding complexity?
Consider excluding logging guards statements from the cyclomatic complexity calculation.
It could be argued that, due to their predictable form, conditional logging checks really don't contribute to the complexity of the code.
Inflexible metrics can make an otherwise good programmer turn bad. Be careful!
Assuming that your tools for calculating complexity can't be tailored to that degree, the following approach may offer a work-around.
The need for conditional logging
I assume that your guard statements were introduced because you had code like this:
private static final Logger log = Logger.getLogger(MyClass.class);

Connection connect(Widget w, Dongle d, Dongle alt)
    throws ConnectionException
{
    log.debug("Attempting connection of dongle " + d + " to widget " + w);
    Connection c;
    try {
        c = w.connect(d);
    } catch(ConnectionException ex) {
        log.warn("Connection failed; attempting alternate dongle " + d, ex);
        c = w.connect(alt);
    }
    log.debug("Connection succeeded: " + c);
    return c;
}
In Java, each of the log statements creates a new StringBuilder, and invokes the toString() method on each object concatenated to the string. These toString() methods, in turn, are likely to create StringBuilder instances of their own, and invoke the toString() methods of their members, and so on, across a potentially large object graph. (Before Java 5, it was even more expensive, since StringBuffer was used, and all of its operations are synchronized.)
This can be relatively costly, especially if the log statement is in some heavily-executed code path. And, written as above, that expensive message formatting occurs even if the logger is bound to discard the result because the log level is too high.
This leads to the introduction of guard statements of the form:
if (log.isDebugEnabled())
    log.debug("Attempting connection of dongle " + d + " to widget " + w);
With this guard, the evaluation of arguments d and w and the string concatenation is performed only when necessary.
A solution for simple, efficient logging
However, if the logger (or a wrapper that you write around your chosen logging package) takes a formatter and arguments for the formatter, the message construction can be delayed until it is certain that it will be used, while eliminating the guard statements and their cyclomatic complexity.
public final class FormatLogger
{
    private final Logger log;

    public FormatLogger(Logger log)
    {
        this.log = log;
    }

    public void debug(String formatter, Object... args)
    {
        log(Level.DEBUG, formatter, args);
    }

    /* … &c. for info, warn; also add overloads to log an exception … */

    public void log(Level level, String formatter, Object... args)
    {
        if (log.isEnabled(level)) {
            /*
             * Only now is the message constructed, and each "arg"
             * evaluated by having its toString() method invoked.
             */
            log.log(level, String.format(formatter, args));
        }
    }
}
class MyClass
{
    private static final FormatLogger log =
        new FormatLogger(Logger.getLogger(MyClass.class));

    Connection connect(Widget w, Dongle d, Dongle alt)
        throws ConnectionException
    {
        log.debug("Attempting connection of dongle %s to widget %s.", d, w);
        Connection c;
        try {
            c = w.connect(d);
        } catch(ConnectionException ex) {
            log.warn("Connection failed; attempting alternate dongle %s.", d);
            c = w.connect(alt);
        }
        log.debug("Connection succeeded: %s", c);
        return c;
    }
}
Now, none of the cascading toString() calls with their buffer allocations will occur unless they are necessary! This effectively eliminates the performance hit that led to the guard statements. One small penalty, in Java, would be auto-boxing of any primitive type arguments you pass to the logger.
The code doing the logging is arguably even cleaner than ever, since untidy string concatenation is gone. It can be even cleaner if the format strings are externalized (using a ResourceBundle), which could also assist in maintenance or localization of the software.
Further enhancements
Also note that, in Java, a MessageFormat object could be used in place of a "format" String, which gives you additional capabilities such as a choice format to handle cardinal numbers more neatly. Another alternative would be to implement your own formatting capability that invokes some interface that you define for "evaluation", rather than the basic toString() method.
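For instance, here is a small sketch of the choice-format idea (the message text and counts are invented for illustration):

import java.text.MessageFormat;

class ChoiceFormatSketch {
    public static void main(String[] args) {
        // The wording adapts to the cardinal number of the argument.
        MessageFormat fmt = new MessageFormat(
                "There {0,choice,0#are no connections|1#is one connection|1<are {0,number,integer} connections}.");
        for (int n : new int[] {0, 1, 5}) {
            // Prints: "There are no connections." / "There is one connection." / "There are 5 connections."
            System.out.println(fmt.format(new Object[] {n}));
        }
    }
}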
In Python you pass the formatted values as parameters to the logging function. String formatting is only applied if logging is enabled. There's still the overhead of a function call, but that's minuscule compared to formatting.
log.info ("a = %s, b = %s", a, b)
You can do something like this for any language with variadic arguments (C/C++, C#/Java, etc).
This isn't really intended for when the arguments are difficult to retrieve, but for when formatting them to strings is expensive. For example, if your code already has a list of numbers in it, you might want to log that list for debugging. Executing mylist.toString() will take a while to no benefit, as the result will be thrown away. So you pass mylist as a parameter to the logging function, and let it handle string formatting. That way, formatting will only be performed if needed.
Since the OP's question specifically mentions Java, here's how the above can be used:
I must insist that the problem is not 'formatting' related, but 'argument evaluation' related (evaluation that can be very costly to do, just before calling a method which will do nothing)
The trick is to have objects that will not perform expensive computations until absolutely needed. This is easy in languages like Smalltalk or Python that support lambdas and closures, but is still doable in Java with a bit of imagination.
Say you have a function get_everything(). It will retrieve every object from your database into a list. You don't want to call this if the result will be discarded, obviously. So instead of using a call to that function directly, you define an inner class called LazyGetEverything:
public class MainClass {
    private class LazyGetEverything {
        @Override
        public String toString() {
            return getEverything().toString();
        }
    }

    private Object getEverything() {
        /* returns what you want to .toString() in the inner class */
    }

    public void logEverything() {
        log.info(new LazyGetEverything());
    }
}
In this code, the call to getEverything() is wrapped so that it won't actually be executed until it's needed. The logging function will execute toString() on its parameters only if debugging is enabled. That way, your code will suffer only the overhead of a function call instead of the full getEverything() call.
In languages supporting lambda expressions or code blocks as parameters, one solution would be to pass just that to the logging method. The logger can then evaluate the configuration and, only if needed, actually call/execute the provided lambda/code block.
I did not try it yet, though.
Theoretically this is possible. I would not like to use it in production due to the performance issues I expect from heavy use of lambdas/code blocks for logging.
But as always: if in doubt, test it and measure the impact on CPU load and memory.
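For what it's worth, here is a rough sketch of how this looks with the Supplier overloads that java.util.logging gained in Java 8 (expensiveToCompute() is a hypothetical stand-in for a costly call):

import java.util.logging.Level;
import java.util.logging.Logger;

class LambdaLoggingSketch {
    private static final Logger log = Logger.getLogger(LambdaLoggingSketch.class.getName());

    void process() {
        // The lambda (a Supplier<String>) is only invoked if FINE is actually loggable,
        // so the expensive call is skipped when the level is disabled.
        log.log(Level.FINE, () -> "State dump: " + expensiveToCompute());
    }

    private String expensiveToCompute() {
        return "...";  // hypothetical stand-in for an expensive computation
    }
}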
Thank you for all your answers! You guys rock :)
Now my feedback is not as straight-forward as yours:
Yes, for one project (as in 'one program deployed and running on its own on a single production platform'), I suppose you can go all technical on me:
dedicated 'Log Retriever' objects, which can be passed to a Logger wrapper that only calls toString() when necessary,
used in conjunction with a logging variadic function (or a plain Object[] array!)
and there you have it, as explained by John Millikin and erickson.
However, this issue forced us to think a little about 'Why exactly were we logging in the first place?'
Our project is actually 30 different projects (5 to 10 people each) deployed on various production platforms, with asynchronous communication needs and a central bus architecture.
The simple logging described in the question was fine for each project at the beginning (5 years ago), but since then we have had to step up. Enter the KPI.
Instead of asking a logger to log anything, we ask an automatically created object (called a KPI) to register an event. It is a simple call (myKPI.I_am_signaling_myself_to_you()), and it does not need to be conditional (which solves the 'artificial increase of cyclomatic complexity' issue).
The KPI object knows who calls it and, since it runs from the beginning of the application, it is able to retrieve lots of data that we previously computed on the spot when we were logging.
Plus, the KPI object can be monitored independently and can compute/publish its information on demand on a single, separate publication bus.
That way, each client can ask for the information it actually wants (like 'has my process begun, and if so, since when?'), instead of looking for the correct log file and grepping for a cryptic String...
Indeed, the question 'Why exactly were we logging in the first place?' made us realize we were not logging just for the programmer and their unit or integration tests, but for a much broader community, including some of the final clients themselves. Our 'reporting' mechanism had to be centralized, asynchronous, 24/7.
The specifics of that KPI mechanism are way out of the scope of this question. Suffice it to say that its proper calibration is by far, hands down, the single most complicated non-functional issue we are facing. It still brings the system to its knees from time to time! Properly calibrated, however, it is a life-saver.
Again, thank you for all the suggestions. We will consider them for some parts of our system when simple logging is still in place.
But the other point of this question was to illustrate to you a specific problem in a much larger and more complicated context.
Hope you liked it. I might ask a question about the KPI approach (which, believe it or not, is not the subject of any question on SO so far!) later next week.
I will leave this answer up for voting until next Tuesday, then I will select an answer (not this one obviously ;) )
Maybe this is too simple, but what about using the "extract method" refactoring around the guard clause? Your example code of this:
public void Example()
{
    if(myLogger.isLoggable(Level.INFO))
        myLogger.info("A String");
    if(myLogger.isLoggable(Level.FINE))
        myLogger.fine("A more complicated String");
    // +1 for each test and log message
}
Becomes this:
public void Example()
{
    _LogInfo();
    _LogFine();
    // +0 for each test and log message
}

private void _LogInfo()
{
    if(!myLogger.isLoggable(Level.INFO))
        return;

    // Do your complex argument calculations/evaluations only when needed.
}

private void _LogFine(){ /* Ditto ... */ }
In C or C++ I'd use the preprocessor instead of the if statements for the conditional logging.
Pass the log level to the logger and let it decide whether or not to write the log statement:
//if(myLogger.isLoggable(Level.INFO) {myLogger.info("A String");
myLogger.info(Level.INFO,"A String");
UPDATE: Ah, I see that you want to conditionally create the log string without a conditional statement. Presumably at runtime rather than compile time.
I'll just say that the way we've solved this is to put the formatting code in the logger class so that the formatting only takes place if the level passes. Very similar to a built-in sprintf. For example:
myLogger.info(Level.INFO,"A String %d",some_number);
That should meet your criteria.
Conditional logging is evil. It adds unnecessary clutter to your code.
You should always send in the objects you have to the logger:
Logger logger = ...
logger.log(Level.FINE, "The foo is {0} and the bar is {1}", new Object[]{foo, bar});
and then have a java.util.logging.Formatter that uses MessageFormat to flatten foo and bar into the string to be output. It will only be called if the logger and handler will log at that level.
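As a rough sketch of that idea (not production code), such a Formatter could look roughly like this; the substitution only happens when a record is actually published:

import java.text.MessageFormat;
import java.util.logging.Formatter;
import java.util.logging.LogRecord;

class MessageFormatFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        Object[] params = record.getParameters();
        String message = record.getMessage();
        // Flatten the parameters into the message with MessageFormat,
        // but only for records that made it past the level checks.
        String text = (params == null || params.length == 0)
                ? message
                : MessageFormat.format(message, params);
        return record.getLevel() + ": " + text + System.lineSeparator();
    }
}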
For added pleasure you could have some kind of expression language to be able to get fine control over how to format the logged objects (toString may not always be useful).
Scala has an annotation @elidable that allows you to remove methods with a compiler flag.
With the scala REPL:
C:>scala
Welcome to Scala version 2.8.0.final (Java HotSpot(TM) 64-Bit Server VM, Java 1.
6.0_16).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import scala.annotation.elidable
import scala.annotation.elidable
scala> import scala.annotation.elidable._
import scala.annotation.elidable._
scala> @elidable(FINE) def logDebug(arg :String) = println(arg)
logDebug: (arg: String)Unit
scala> logDebug("testing")
scala>
With -Xelide-below set:
C:>scala -Xelide-below 0
Welcome to Scala version 2.8.0.final (Java HotSpot(TM) 64-Bit Server VM, Java 1.
6.0_16).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import scala.annotation.elidable
import scala.annotation.elidable
scala> import scala.annotation.elidable._
import scala.annotation.elidable._
scala> @elidable(FINE) def logDebug(arg :String) = println(arg)
logDebug: (arg: String)Unit
scala> logDebug("testing")
testing
scala>
See also Scala assert definition
As much as I hate macros in C/C++, at work we have #defines for the if part, which, if the level check is false, ignores (does not evaluate) the following expressions, but if true returns a stream into which values can be piped using the '<<' operator.
Like this:
LOGGER(LEVEL_INFO) << "A String";
I assume this would eliminate the extra 'complexity' that your tool sees, and also eliminates any calculating of the string, or any expressions to be logged if the level was not reached.
Here is an elegant solution using a ternary expression:
logger.info(logger.isInfoEnabled() ? "Log Statement goes here..." : null);
Consider a logging util function ...
void debugUtil(String s, Object... args) {
    if (LOG.isDebugEnabled())
        LOG.debug(s, args);
}
Then make the call with a "closure" around the expensive evaluation that you want to avoid.
debugUtil("We got a %s", new Object() {
    @Override
    public String toString() {
        // only evaluated if the debug statement is executed
        return expensiveCallToGetSomeValue().toString();
    }
});