Exceptions in Yesod - json

I had made a daemon that used a very primitive form of IPC (telnet in and send a String with certain words in a certain order). I snapped out of it and am now using JSON to pass messages to a Yesod server. However, there were some things I really liked about my design, and I'm not sure what my choices are now.
Here's what I was doing:
buildManager :: Phase -> IO ()
buildManager phase = do
    let buildSeq = findSeq phase
        jid      = JobID $ pack "8"
        config   = MkConfig $ Just jid
    flip C.catch exceptionHandler $
        runReaderT (sequence_ $ buildSeq <*> stages) config
    -- ^^ I would really like to keep the above line of code, or something like it.
    return ()
Each function in buildSeq looked like this:
foo :: Stage -> ReaderT Config IO ()
data Config = MkConfig (Either JobID Product) BaseDir JobMap
JobMap is a TMVar Map that tracks information about current jobs.
So now what I have are Handlers that all look like this:
foo :: Handler RepJson
foo represents a command for my daemon; each handler may have to process a different JSON object.
What I would like to do is send one JSON object that represents success, and another JSON object that expresses information about some exception.
I would like foo's helper function to be able to return an Either, but I'm not sure how I get that, plus the ability to terminate evaluation of my list of actions, buildSeq.
Here's the only choice I see:
1) Make sure exceptionHandler is in Handler. Put JobMap in the App record. Using getYesod, alter the appropriate value in JobMap to record details about the exception, which can then be accessed by foo.
Is there a better way?
What are my other choices?
Edit: For clarity, I will explain the role of Handler RepJson. The server needs some way to accept commands such as build, stop, and report. The client needs some way of knowing the results of these commands. I have chosen JSON as the medium with which the server and client communicate with each other. I'm using the Handler type just to manage the JSON in/out and nothing more.

Philosophically speaking, in the Haskell/Yesod world you want to pass the values forward, rather than return them backwards. So instead of having the handlers return a value, have them call forwards to the next step in the process, which may be to generate an exception.
Remember that you can bundle any amount of future actions into a single object, so you can pass a continuation object to your handlers and foos that basically tells them, "After you are done, run this blob of code." That way they can be void and return nothing.
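As a rough sketch of that continuation style (written in Java for concreteness; BuildStep, Config, and doWork are hypothetical names, not part of Yesod):

import java.util.function.Consumer;

class Config {}

class BuildStep {
    // Each step receives continuations and calls forward into exactly one
    // of them, instead of returning an Either-like result backwards.
    void run(Config config, Consumer<String> onSuccess, Consumer<Exception> onFailure) {
        try {
            String result = doWork(config); // may throw
            onSuccess.accept(result);       // pass the value forward
        } catch (Exception e) {
            onFailure.accept(e);            // the rest of the chain never runs
        }
    }

    private String doWork(Config config) { return "ok"; }
}

Whoever constructs the two continuations decides what the success JSON and the exception JSON look like, so the steps themselves can stay void.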


How to run SAP R/3 transactions through JCo3? Or execute reports through JCo?

If I log in to SAP R/3 and execute the transaction code MM60, it shows a UI screen for the material list and asks for a material number. If I specify a material number and execute, it shows me the output, i.e. the material list.
Here the story ends if I am a SAP R/3 user.
But what if I want to do the same steps above using a Java program and get the result in Java itself, instead of going to SAP R/3? I want to do this because I want to use that output data for a BI tool.
Suppose I am using JCo3 for the connection to R/3.
EDIT
Based on the info in the link, I tried something like the code below, but it neither schedules any job in the background nor downloads any spool file.
I've manually sent a doc to the spool and tried giving its ID in the code. This is for MM60.
JCoContext.begin(destination);
function = mRepository.getFunction("BAPI_XBP_JOB_OPEN");
JCoParameterList input = function.getImportParameterList();
input.setValue("JOBNAME", "jb1");
input.setValue("EXTERNAL_USER_NAME", "sap*");
function.execute(destination);
JCoFunction function2 = mRepository.getFunction("BAPI_XBP_JOB_ADD_ABAP_STEP");
function2.getImportParameterList().setValue("JOBNAME", "jb1");
function2.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function2.getImportParameterList().setValue("ABAP_PROGRAM_NAME", "RMMVRZ00");
function2.getImportParameterList().setValue("ABAP_VARIANT_NAME", "KRUGMANN");
function2.getImportParameterList().setValue("SAP_USER_NAME", "sap*");
function2.getImportParameterList().setValue("LANGUAGE", destination.getLanguage());
function2.execute(destination);
function3.getImportParameterList().setValue("JOBNAME", "jb1");
function3.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function3.getImportParameterList().setValue("EXT_PROGRAM_NAME", "RMMVRZ00");
function3.getImportParameterList().setValue("SAP_USER_NAME", "sap*");
function3.execute(destination);
JCoFunction function4 = mRepository.getFunction("BAPI_XBP_JOB_CLOSE");
function4.getImportParameterList().setValue("JOBNAME", "jb1");
function4.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function4.execute(destination);
JCoFunction function5 = mRepository.getFunction("BAPI_XBP_JOB_START_ASAP");
function5.getImportParameterList().setValue("JOBNAME", "jb1");
function5.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
function5.execute(destination);
JCoFunction function6 = mRepository.getFunction("RSPO_DOWNLOAD_SPOOLJOB");
function6.getImportParameterList().setValue("ID", "31801");
function6.getImportParameterList().setValue("FNAME", "abc");
function6.execute(destination);
You cannot execute an SAP transaction through JCo. What you can do is run remote-enabled function modules. So you need to either write a function module of your own, providing exactly the functionality you require, or find an SAP function module that does what you need (or comes close enough to be useful).
Your code has the following issues:
XBP BAPIs can only be used if you declare their usage via BAPI_XMI_LOGON and BAPI_XMI_LOGOFF. Pass the parameters interface = 'XBP', version = '3.0', extcompany = 'any name you want'.
You start the program RMMVRZ00 (which corresponds to the program directly behind the transaction code MM60) with the program variant KRUGMANN which is defined at SAP side with a given material number, but your goal is probably to pass a varying material number, so you should first change the material number in the program variant via BAPI_XBP_VARIANT_CHANGE.
After calling BAPI_XBP_JOB_OPEN, you should read the returned value of the JOBCOUNT parameter and pass it to all subsequent BAPI_XBP_JOB_* calls, along with JOBNAME (two jobs may be named identically; JOBCOUNT is there to identify the job uniquely).
After calling BAPI_XBP_JOB_START_ASAP, you should wait for the job to be finished, by repeatedly calling BAPI_XBP_JOB_STATUS_GET until the job status is A (aborted) or F (finished successfully).
You hardcode the spool number generated by the program. To retrieve the spool number, you may call BAPI_XBP_JOB_SPOOLLIST_READ which returns all spool data of the job.
Moreover, I'm not sure whether you may call the function module RSPO_DOWNLOAD_SPOOLJOB to download the spool data to a file on your Java machine. If it doesn't work, you may use the spool data returned by BAPI_XBP_JOB_SPOOLLIST_READ and do with it whatever you want.
In short, I think that the sequence should be as follows (a rough JCo sketch follows the list):
BAPI_XMI_LOGON
BAPI_XBP_VARIANT_CHANGE
BAPI_XBP_JOB_OPEN
BAPI_XBP_JOB_ADD_ABAP_STEP
BAPI_XBP_JOB_CLOSE
BAPI_XBP_JOB_START_ASAP
Calling repeatedly BAPI_XBP_JOB_STATUS_GET until status is A or F
Note that it may take some time if there are many jobs waiting in the SAP queue
BAPI_XBP_JOB_SPOOLLIST_READ
Possibly RSPO_DOWNLOAD_SPOOLJOB, if it works
BAPI_XMI_LOGOFF
Possibly BAPI_TRANSACTION_COMMIT, because XMI writes an XMI log.
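A rough JCo sketch of that sequence (destination and mRepository as in the question's snippet; only the logon, job-open, and status-polling steps are shown, and the exact import/export parameter names, in particular those of BAPI_XMI_LOGON, are assumptions to verify in transaction SE37):

JCoFunction logon = mRepository.getFunction("BAPI_XMI_LOGON");
logon.getImportParameterList().setValue("INTERFACE", "XBP");
logon.getImportParameterList().setValue("VERSION", "3.0");
logon.getImportParameterList().setValue("EXTCOMPANY", "anyNameYouWant");
logon.getImportParameterList().setValue("EXTPRODUCT", "anyProduct"); // assumed parameter
logon.execute(destination);

JCoFunction open = mRepository.getFunction("BAPI_XBP_JOB_OPEN");
open.getImportParameterList().setValue("JOBNAME", "jb1");
open.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
open.execute(destination);
// JOBCOUNT identifies the job uniquely; pass it to all later BAPI_XBP_JOB_* calls
String jobCount = open.getExportParameterList().getString("JOBCOUNT");

// ... BAPI_XBP_VARIANT_CHANGE, BAPI_XBP_JOB_ADD_ABAP_STEP, BAPI_XBP_JOB_CLOSE,
// BAPI_XBP_JOB_START_ASAP, each with JOBNAME "jb1" and JOBCOUNT jobCount ...

// poll until the job is finished: A = aborted, F = finished
String status = null;
do {
    try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); break; }
    JCoFunction statusGet = mRepository.getFunction("BAPI_XBP_JOB_STATUS_GET");
    statusGet.getImportParameterList().setValue("JOBNAME", "jb1");
    statusGet.getImportParameterList().setValue("JOBCOUNT", jobCount);
    statusGet.getImportParameterList().setValue("EXTERNAL_USER_NAME", "sap*");
    statusGet.execute(destination);
    status = statusGet.getExportParameterList().getString("STATUS");
} while (!"A".equals(status) && !"F".equals(status));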

Saving JSON data to DB in Zotonic

I'm trying to write a small app that retrieves a JSON file (it contains a list of items, which all have some properties), saves its contents to the DB, and then displays some of it later on. I have Zotonic up and running, and generating some HTML is no problem.
At the moment I'm stuck trying to figure out how to define a custom resource and how to get the data from the JSON into the DB. Once the data is there I should be fine; that part seems covered well by the documentation.
I wrote some standalone Erlang scripts that fetch the data, and I noticed that Zotonic has a library for decoding JSON, so that part should be fine. Any tips on where to put which code or where to look further?
The z_db module allows for creating custom tables by using:
z_db:create_table(Table, Cols, Context).
The Table variable is your table name which can be either an atom or a list containing a single atom.
Cols is a list of column definitions, which are defined by records. Currently the record definition (you can find it in include/zotonic.hrl) is:
-record(column_def, {name, type, length, is_nullable=true, default, primary_key}).
See the Erlang documentation on records for more info.
Example code which I put in users/sites/[sitename]/models/m_[sitename].erl:
init(Context) ->
    case z_db:table_exists(?table, Context) of
        false ->
            z_db:create_table(tablename,
                [
                    #column_def{name=id, type="serial"},
                    #column_def{name=gid, type="integer", is_nullable=false},
                    #column_def{name=magnitude, type="real"},
                    #column_def{name=depth, type="real"},
                    #column_def{name=location, type="character varying"},
                    #column_def{name=time, type="integer"},
                    #column_def{name=date, type="integer"}
                ], Context);
        true -> ok
    end,
    ok.
Pay attention to which options of the record you specify. Most of the errors I got came from things like specifying a length on the integer fields.
models/m_sitename:init/1 does not get called on site start, but sitename:init/1 does, so I call the model's init function from there to ensure the table exists. Example:
init(Context) ->
    m_sitename:init(Context).
It is called by Zotonic with the Context variable of the site automatically. You can get this variable manually as well with z:c(sitename). So if you call m_sitename:init(Context). from somewhere else you would do:
m_sitename:init(z:c(sitename)).
Next, insertion in the DB can be done with:
z_db:insert(Table, PropList, Context).
Where Table is again an atom or a list containing a single atom representing the table name. Context is the same as above.
PropList is a property list, i.e. a list of two-element tuples where the first element is an atom and the second is its associated value/property. Example:
PropList = [
    {row, Value},
    {anotherrow, AnotherValue}
].
Table = tablename.
Context = z:c(sitename).
z_db:insert(Table, PropList, Context).
See the Erlang documentation on property lists for more info.
=== The dependencies have been updated so if you build from source the step directly below is no longer needed ===
The JSON part is a bit more tricky. Included with Zotonic is mochijson2 and, as a secondary dependency, also jiffy. The latest version of jiffy contains jiffy:decode/2, which allows you to specify maps as the return type. Much more readable than the standard {struct, {struct, <<"">>}} monster. To update to the latest version, edit the line in deps/twerl/rebar.config that says
{jiffy, ".*", {git, "https://github.com/davisp/jiffy.git", {tag, "0.8.3"}}},
to
{jiffy, ".*", {git, "https://github.com/davisp/jiffy.git", {tag, "0.14.3"}}},
Now run z:m(). in the Zotonic shell. (You must do this after every change in your code.)
Now check in the Zotonic shell whether jiffy:decode/2 is available by typing jiffy: followed by <tab>; it will show a list of available functions and their arities.
To retrieve a JSON file from the internet run:
{ok, {{_, 200, _}, _, Body}} = httpc:request(get, {"url-to-JSON-here", []}, [], [])
This yields the variable Body with the contents. See the Erlang documentation on the httpc client for more info on this call.
Next convert the contents of Body to Erlang terms with:
JsonData = jiffy:decode(Body, [return_maps]).
What you have to do next depends a lot on the structure of your JSON resource. Keep in mind that everything is now in binary UTF-8 encoded strings! If you print JsonData to the screen (just enter JsonData. in your Zotonic/Erlang shell) you will see a lot of #{<<"key">> => <<"value">>} maps.
My data was nested so I had to extract the needed data like this:
[{_,ItemList}|_] = ListData.
This gave me a list of maps, and in order to deal with them as individual items I used the following function:
get_maps([]) ->
    done;
get_maps([First|Rest]) ->
    Map = maps:get(<<"properties">>, First),
    case is_map(Map) of
        true ->
            map_to_proplist(Map),
            get_maps(Rest);
        false -> done
    end,
    done;
get_maps(_) ->
    done.
As you might remember, the z_db:insert/3 function needs a property list to populate rows, so that's what the call to map_to_proplist/1 is for. How this function looks depends completely on how your data looks, but as an example here is what worked for me:
map_to_proplist(Map) ->
    case is_map(Map) of
        true ->
            {Value1,_} = string:to_integer(binary_to_list(maps:get(<<"key1">>, Map))),
            {Value2,_} = string:to_float(binary_to_list(maps:get(<<"key2">>, Map))),
            {Value3,_} = string:to_float(binary_to_list(maps:get(<<"key3">>, Map))),
            Value4 = binary_to_list(maps:get(<<"key4">>, Map)),
            {Value5,_} = string:to_integer(binary_to_list(maps:get(<<"key5">>, Map))),
            {Value6,_} = string:to_integer(binary_to_list(maps:get(<<"key6">>, Map))),
            PropList = [{rowname1, Value1}, {rowname2, Value2}, {rowname3, Value3},
                        {rowname4, Value4}, {rowname5, Value5}, {rowname6, Value6}],
            m_sitename:insert_items(PropList, z:c(sitename)),
            ok;
        false ->
            ok
    end.
See the documentation on string:to_integer/1 and string:to_float/1 as to why the tuples are needed when casting. The call to m_sitename:insert_items(PropList, z:c(sitename)) calls z_db:insert/3 in models/m_sitename.erl, but wrapped in a catch:
insert_items(PropList, Context) ->
    (catch z_db:insert(?table, PropList, Context)).
Ok, quite a long post but this should get you up and running if you were looking for this answer.
The above was done with Zotonic 0.13.2 on Erlang/OTP 18.
A repost (except the JSON part) of my post in the Zotonic Developers group.

Is currying just a way to avoid inheritance?

So my understanding of currying (based on SO questions) is that it lets you partially set parameters of a function and return a "truncated" function as a result.
If you have a big hairy function that takes 10 parameters and looks like
function (location, type, gender, jumpShot%, SSN, vegetarian, salary) {
//weird stuff
}
and you want a "subset" function that will let you deal with presets for all but the jumpShot%, shouldn't you just break out a class that inherits from the original function?
I suppose what I'm looking for is a use case for this pattern. Thanks!
Currying has many uses. From simply specifying default parameters for functions you use often to returning specialized functions that serve a specific purpose.
But let me give you this example:
function log_message(log_level, message){}
log_error = curry(log_message, ERROR)
log_warning = curry(log_message, WARNING)
log_message(WARNING, 'This would be a warning')
log_warning('This would also be a warning')
In JavaScript I do currying on callback functions, because no parameters can be passed to them at the point where the caller invokes them.
So something like:
...
var test = "something specifically set in this function";
onSuccess: this.returnCallback.curry(test).bind(this),
// This will fail (because this would pass the var and possibly change it should the function be run elsewhere)
onSuccess: this.returnCallback.bind(this,test),
...
// this has 2 params, but in the ajax callback, only the 'ajaxResponse' is passed, so I have to use curry
returnCallback: function(thePassedVar, ajaxResponse){
// now in here I can have 'thePassedVar'
}
I'm not sure if that was detailed or coherent enough... but currying basically lets you 'prefill' the parameters and return a bare function that already has data filled in (instead of requiring you to fill that info in at some other point).
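A minimal, self-contained sketch of that 'prefill' idea in Java (curry and logMessage are illustrative names, not a library API):

import java.util.function.BiFunction;
import java.util.function.Function;

public class CurryDemo {
    // curry: turn a two-argument function into a chain of one-argument functions
    static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }

    public static void main(String[] args) {
        BiFunction<String, String, String> logMessage =
                (level, msg) -> "[" + level + "] " + msg;

        // "prefill" the level, getting back a one-argument logger
        Function<String, String> logWarning = curry(logMessage).apply("WARNING");

        System.out.println(logMessage.apply("WARNING", "This would be a warning"));
        System.out.println(logWarning.apply("This would also be a warning"));
    }
}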
When programming in a functional style, you often bind arguments to generate new functions (in this example, predicates) from old. Pseudo-code:
filter(bind_second(greater_than, 5), some_list)
might be equivalent to:
filter({x : x > 5}, some_list)
where {x : x > 5} is an anonymous function definition. That is, it constructs a list of all values from some_list which are greater than 5.
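A runnable Java rendering of that pseudo-code (bindSecond is a hypothetical helper standing in for bind_second):

import java.util.List;
import java.util.function.BiPredicate;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class BindDemo {
    // bind_second: fix the second argument of a two-argument predicate
    static <A, B> Predicate<A> bindSecond(BiPredicate<A, B> p, B b) {
        return a -> p.test(a, b);
    }

    public static void main(String[] args) {
        BiPredicate<Integer, Integer> greaterThan = (x, y) -> x > y;
        Predicate<Integer> greaterThan5 = bindSecond(greaterThan, 5);
        List<Integer> result = List.of(2, 6, 9).stream()
                .filter(greaterThan5)          // filter(bind_second(greater_than, 5), some_list)
                .collect(Collectors.toList());
        System.out.println(result);            // [6, 9]
    }
}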
In many cases, the parameters to be omitted will not be known at compile time, but rather at run time. Further, there's no limit to the number of curried delegates that may exist for a given function. The following is adapted from a real-world program.
I have a system in which I send out command packets to a remote machine and receive back response packets. Every command packet has an index number, and each reply bears the index number of the command to which it is a response. A typical command, translated into English, might be "give me 128 bytes starting at address 0x12300". A typical response might be "Successful." along with 128 bytes of data.
To handle communication, I have a routine which accepts a number of command packets, each with a delegate. As each response is received, the corresponding delegate will be run on the received data. The delegate associated with the command above would be something like "Confirm that I got a 'success' with 128 bytes of data, and if so, store them into my buffer at address 0x12300". Note that multiple packets may be outstanding at any given time; the curried address parameter is necessary for the routine to know where the incoming data should go. Even if I wanted to write a "store data to buffer" routine which didn't require an address parameter, it would have no way of knowing where the incoming data should go.
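A small sketch of that pattern (hypothetical names; the point is that the destination offset is baked into the delegate when the command is sent):

import java.util.function.Consumer;

class ResponseHandlers {
    // Build a response handler with the destination offset "curried" in.
    static Consumer<byte[]> storeAt(byte[] buffer, int offset) {
        return data -> System.arraycopy(data, 0, buffer, offset, data.length);
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[1024];
        // e.g. "give me 128 bytes starting at address 0x300": the handler
        // already knows where the incoming data should go.
        Consumer<byte[]> onReply = storeAt(buffer, 0x300);
        onReply.accept(new byte[]{1, 2, 3}); // run when the response arrives
    }
}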

Conditional logging with minimal cyclomatic complexity

After reading "What's your/a good limit for cyclomatic complexity?", I realize many of my colleagues were quite annoyed with this new QA policy on our project: no more than 10 cyclomatic complexity per function.
Meaning: no more than 10 'if', 'else', 'try', 'catch' and other code workflow branching statements. Right. As I explained in 'Do you test private method?', such a policy has many good side-effects.
But: at the beginning of our project (200 people, 7 years long), we were happily logging (and no, we cannot easily delegate that to some kind of aspect-oriented programming approach for logs):
myLogger.info("A String");
myLogger.fine("A more complicated String");
...
And when the first versions of our system went live, we experienced huge memory problems, not because of the logging (which was at one point turned off), but because of the log parameters (the strings), which were always calculated, then passed to the 'info()' or 'fine()' functions, only to discover that the level of logging was 'OFF' and that no logging was taking place!
So QA came back and urged our programmers to do conditional logging. Always.
if(myLogger.isLoggable(Level.INFO)) { myLogger.info("A String"); }
if(myLogger.isLoggable(Level.FINE)) { myLogger.fine("A more complicated String"); }
...
But now, with that 'cannot-be-moved' limit of 10 cyclomatic complexity per function, they argue that the various logs they put in their functions are felt as a burden, because each "if(isLoggable())" is counted as +1 cyclomatic complexity!
So if a function has 8 'if', 'else' and so on in one tightly-coupled, not-easily-shareable algorithm, plus 3 critical log actions... they breach the limit even though the conditional logs may not really be part of said complexity of that function...
How would you address this situation?
I have seen a couple of interesting coding evolution (due to that 'conflict') in my project, but I just want to get your thoughts first.
Thank you for all the answers.
I must insist that the problem is not 'formatting' related, but 'argument evaluation' related (evaluation that can be very costly to do, just before calling a method which will do nothing)
So when I wrote "A String" above, I actually meant aFunction(), with aFunction() returning a String and being a call to a complicated method collecting and computing all kinds of log data to be displayed by the logger... or not (hence the issue, and the obligation to use conditional logging, hence the actual issue of an artificial increase in 'cyclomatic complexity'...).
I now get the 'variadic function' point advanced by some of you (thank you John).
Note: a quick test in Java 6 shows that my varargs function does evaluate its arguments before being called, so this cannot be applied to the function call itself, but it can to a 'log retriever object' (or 'function wrapper') on which toString() will only be called if needed. Got it.
I have now posted my experience on this topic.
I will leave it there until next Tuesday for voting, then I will select one of your answers.
Again, thank you for all the suggestions :)
With current logging frameworks, the question is moot
Current logging frameworks like slf4j or log4j 2 don't require guard statements in most cases. They use a parameterized log statement so that an event can be logged unconditionally, but message formatting only occurs if the event is enabled. Message construction is performed as needed by the logger, rather than pre-emptively by the application.
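For instance, with slf4j (a minimal sketch; the {} placeholders are slf4j's parameterized-message syntax):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ConnectDemo {
    private static final Logger log = LoggerFactory.getLogger(ConnectDemo.class);

    void attempt(Object w, Object d) {
        // No guard needed: the message is formatted, and the arguments'
        // toString() methods invoked, only if DEBUG is enabled.
        log.debug("Attempting connection of dongle {} to widget {}", d, w);
    }
}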
If you have to use an antique logging library, you can read on to get more background and a way to retrofit the old library with parameterized messages.
Are guard statements really adding complexity?
Consider excluding logging guard statements from the cyclomatic complexity calculation.
It could be argued that, due to their predictable form, conditional logging checks really don't contribute to the complexity of the code.
Inflexible metrics can make an otherwise good programmer turn bad. Be careful!
Assuming that your tools for calculating complexity can't be tailored to that degree, the following approach may offer a work-around.
The need for conditional logging
I assume that your guard statements were introduced because you had code like this:
private static final Logger log = Logger.getLogger(MyClass.class);

Connection connect(Widget w, Dongle d, Dongle alt)
    throws ConnectionException
{
    log.debug("Attempting connection of dongle " + d + " to widget " + w);
    Connection c;
    try {
        c = w.connect(d);
    } catch(ConnectionException ex) {
        log.warn("Connection failed; attempting alternate dongle " + d, ex);
        c = w.connect(alt);
    }
    log.debug("Connection succeeded: " + c);
    return c;
}
In Java, each of the log statements creates a new StringBuilder, and invokes the toString() method on each object concatenated to the string. These toString() methods, in turn, are likely to create StringBuilder instances of their own, and invoke the toString() methods of their members, and so on, across a potentially large object graph. (Before Java 5, it was even more expensive, since StringBuffer was used, and all of its operations are synchronized.)
This can be relatively costly, especially if the log statement is in some heavily-executed code path. And, written as above, that expensive message formatting occurs even if the logger is bound to discard the result because the log level is too high.
This leads to the introduction of guard statements of the form:
if (log.isDebugEnabled())
    log.debug("Attempting connection of dongle " + d + " to widget " + w);
With this guard, the evaluation of arguments d and w and the string concatenation is performed only when necessary.
A solution for simple, efficient logging
However, if the logger (or a wrapper that you write around your chosen logging package) takes a formatter and arguments for the formatter, the message construction can be delayed until it is certain that it will be used, while eliminating the guard statements and their cyclomatic complexity.
public final class FormatLogger
{
    private final Logger log;

    public FormatLogger(Logger log)
    {
        this.log = log;
    }

    public void debug(String formatter, Object... args)
    {
        log(Level.DEBUG, formatter, args);
    }

    // ... &c. for info, warn; also add overloads to log an exception ...

    public void log(Level level, String formatter, Object... args)
    {
        if (log.isEnabled(level)) {
            /*
             * Only now is the message constructed, and each "arg"
             * evaluated by having its toString() method invoked.
             */
            log.log(level, String.format(formatter, args));
        }
    }
}
class MyClass
{
    private static final FormatLogger log =
        new FormatLogger(Logger.getLogger(MyClass.class));

    Connection connect(Widget w, Dongle d, Dongle alt)
        throws ConnectionException
    {
        log.debug("Attempting connection of dongle %s to widget %s.", d, w);
        Connection c;
        try {
            c = w.connect(d);
        } catch(ConnectionException ex) {
            log.warn("Connection failed; attempting alternate dongle %s.", d);
            c = w.connect(alt);
        }
        log.debug("Connection succeeded: %s", c);
        return c;
    }
}
Now, none of the cascading toString() calls with their buffer allocations will occur unless they are necessary! This effectively eliminates the performance hit that led to the guard statements. One small penalty, in Java, would be auto-boxing of any primitive type arguments you pass to the logger.
The code doing the logging is arguably even cleaner than ever, since untidy string concatenation is gone. It can be even cleaner if the format strings are externalized (using a ResourceBundle), which could also assist in maintenance or localization of the software.
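A minimal sketch of that externalization, assuming a hypothetical LogMessages.properties on the classpath containing a line such as connect.attempt=Attempting connection of dongle %s to widget %s.:

import java.util.ResourceBundle;

final class LogMessages {
    private static final ResourceBundle MESSAGES =
        ResourceBundle.getBundle("LogMessages");

    // Format strings live in LogMessages.properties, so they can be
    // maintained or localized without touching the code.
    static String get(String key) {
        return MESSAGES.getString(key);
    }
}
// usage with the FormatLogger above:
//   log.debug(LogMessages.get("connect.attempt"), d, w);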
Further enhancements
Also note that, in Java, a MessageFormat object could be used in place of a "format" String, which gives you additional capabilities such as a choice format to handle cardinal numbers more neatly. Another alternative would be to implement your own formatting capability that invokes some interface that you define for "evaluation", rather than the basic toString() method.
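For example, MessageFormat's choice format handles cardinal numbers like this (the pattern is adapted from the JDK documentation):

import java.text.MessageFormat;

class ChoiceDemo {
    public static void main(String[] args) {
        String pattern =
            "There {0,choice,0#are no files|1#is one file|1<are {0,number,integer} files}.";
        System.out.println(MessageFormat.format(pattern, 0)); // There are no files.
        System.out.println(MessageFormat.format(pattern, 1)); // There is one file.
        System.out.println(MessageFormat.format(pattern, 3)); // There are 3 files.
    }
}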
In Python you pass the formatted values as parameters to the logging function. String formatting is only applied if logging is enabled. There's still the overhead of a function call, but that's minuscule compared to formatting.
log.info ("a = %s, b = %s", a, b)
You can do something like this for any language with variadic arguments (C/C++, C#/Java, etc).
This isn't really intended for when the arguments are difficult to retrieve, but for when formatting them to strings is expensive. For example, if your code already has a list of numbers in it, you might want to log that list for debugging. Executing mylist.toString() will take a while to no benefit, as the result will be thrown away. So you pass mylist as a parameter to the logging function, and let it handle string formatting. That way, formatting will only be performed if needed.
Since the OP's question specifically mentions Java, here's how the above can be used:
I must insist that the problem is not 'formatting' related, but 'argument evaluation' related (evaluation that can be very costly to do, just before calling a method which will do nothing)
The trick is to have objects that will not perform expensive computations until absolutely needed. This is easy in languages like Smalltalk or Python that support lambdas and closures, but is still doable in Java with a bit of imagination.
Say you have a function get_everything(). It will retrieve every object from your database into a list. You don't want to call this if the result will be discarded, obviously. So instead of using a call to that function directly, you define an inner class called LazyGetEverything:
public class MainClass {
    private class LazyGetEverything {
        @Override
        public String toString() {
            return getEverything().toString();
        }
    }

    private Object getEverything() {
        /* returns what you want to .toString() in the inner class */
        return null; // placeholder
    }

    public void logEverything() {
        log.info(new LazyGetEverything());
    }
}
In this code, the call to getEverything() is wrapped so that it won't actually be executed until it's needed. The logging function will execute toString() on its parameters only if debugging is enabled. That way, your code will suffer only the overhead of a function call instead of the full getEverything() call.
In languages supporting lambda expressions or code blocks as parameters, one solution would be to pass just that to the logging method. That method could evaluate the configuration and, only if needed, actually call/execute the provided lambda/code block.
I did not try it yet, though.
Theoretically this is possible. I would not like to use it in production due to the performance issues I expect with that heavy use of lambdas/code blocks for logging.
But as always: if in doubt, test it and measure the impact on CPU load and memory.
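One concrete instance of this: since Java 8, java.util.logging's Logger has Supplier-based overloads, so the message is constructed only if the level is enabled. A minimal sketch:

import java.util.logging.Level;
import java.util.logging.Logger;

class SupplierLogging {
    private static final Logger LOG = Logger.getLogger(SupplierLogging.class.getName());

    void work() {
        // expensiveDescription() runs only if FINE/FINER is enabled for LOG
        LOG.fine(() -> "state = " + expensiveDescription());
        LOG.log(Level.FINER, () -> "details = " + expensiveDescription());
    }

    private String expensiveDescription() { return "..."; }
}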
Thank you for all your answers! You guys rock :)
Now my feedback is not as straight-forward as yours:
Yes, for one project (as in 'one program deployed and running on its own on a single production platform'), I suppose you can go all technical on me:
dedicated 'log retriever' objects, which can be passed to a Logger wrapper that only calls toString() when necessary,
used in conjunction with a logging variadic function (or a plain Object[] array!)
and there you have it, as explained by @John Millikin and @erickson.
However, this issue forced us to think a little about 'Why exactly were we logging in the first place?'
Our project is actually 30 different projects (5 to 10 people each) deployed on various production platforms, with asynchronous communication needs and central bus architecture.
The simple logging described in the question was fine for each project at the beginning (5 years ago), but since then, we have had to step up. Enter the KPI.
Instead of asking a logger to log anything, we ask an automatically created object (called a KPI) to register an event. It is a simple call (myKPI.I_am_signaling_myself_to_you()), and does not need to be conditional (which solves the 'artificial increase of cyclomatic complexity' issue).
That KPI object knows who calls it and, since it runs from the beginning of the application, is able to retrieve lots of data we were previously computing on the spot when we were logging.
Plus, that KPI object can be monitored independently and can compute/publish its information on demand on a single, separate publication bus.
That way, each client can ask for the information it actually wants (like, 'has my process begun, and if yes, since when?'), instead of looking for the correct log file and grepping for a cryptic String...
Indeed, the question 'Why exactly were we logging in the first place?' made us realize we were not logging just for the programmer and his unit or integration tests, but for a much broader community, including some of the final clients themselves. Our 'reporting' mechanism had to be centralized, asynchronous, 24/7.
The specifics of that KPI mechanism are way outside the scope of this question. Suffice it to say that its proper calibration is by far, hands down, the single most complicated non-functional issue we are facing. It still does bring the system to its knees from time to time! Properly calibrated, however, it is a life-saver.
Again, thank you for all the suggestions. We will consider them for some parts of our system when simple logging is still in place.
But the other point of this question was to illustrate to you a specific problem in a much larger and more complicated context.
Hope you liked it. I might ask a question about KPIs (which, believe it or not, are not in any question on SO so far!) later next week.
I will leave this answer up for voting until next Tuesday, then I will select an answer (not this one obviously ;) )
Maybe this is too simple, but what about using the "extract method" refactoring around the guard clause? Your example code of this:
public void Example()
{
    if(myLogger.isLoggable(Level.INFO))
        myLogger.info("A String");
    if(myLogger.isLoggable(Level.FINE))
        myLogger.fine("A more complicated String");
    // +1 for each test and log message
}
Becomes this:
public void Example()
{
    _LogInfo();
    _LogFine();
    // +0 for each test and log message
}

private void _LogInfo()
{
    if(!myLogger.isLoggable(Level.INFO))
        return;

    // Do your complex argument calculations/evaluations only when needed.
}

private void _LogFine(){ /* Ditto ... */ }
In C or C++ I'd use the preprocessor instead of the if statements for the conditional logging.
Pass the log level to the logger and let it decide whether or not to write the log statement:
// if(myLogger.isLoggable(Level.INFO)) { myLogger.info("A String"); }
myLogger.info(Level.INFO, "A String");
UPDATE: Ah, I see that you want to conditionally create the log string without a conditional statement. Presumably at runtime rather than compile time.
I'll just say that the way we've solved this is to put the formatting code in the logger class so that the formatting only takes place if the level passes. Very similar to a built-in sprintf. For example:
myLogger.info(Level.INFO,"A String %d",some_number);
That should meet your criteria.
Conditional logging is evil. It adds unnecessary clutter to your code.
You should always send in the objects you have to the logger:
Logger logger = ...
logger.log(Level.FINE, "The foo is {0} and the bar is {1}", new Object[]{foo, bar});
and then have a java.util.logging.Formatter that uses MessageFormat to flatten foo and bar into the string to be output. It will only be called if the logger and handler will log at that level.
For added pleasure you could have some kind of expression language to be able to get fine control over how to format the logged objects (toString may not always be useful).
Scala has an annotation @elidable() that allows you to remove methods with a compiler flag.
With the scala REPL:
C:>scala
Welcome to Scala version 2.8.0.final (Java HotSpot(TM) 64-Bit Server VM, Java 1.
6.0_16).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import scala.annotation.elidable
import scala.annotation.elidable
scala> import scala.annotation.elidable._
import scala.annotation.elidable._
scala> @elidable(FINE) def logDebug(arg :String) = println(arg)
logDebug: (arg: String)Unit
scala> logDebug("testing")
scala>
With -Xelide-below set:
C:>scala -Xelide-below 0
Welcome to Scala version 2.8.0.final (Java HotSpot(TM) 64-Bit Server VM, Java 1.
6.0_16).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import scala.annotation.elidable
import scala.annotation.elidable
scala> import scala.annotation.elidable._
import scala.annotation.elidable._
scala> @elidable(FINE) def logDebug(arg :String) = println(arg)
logDebug: (arg: String)Unit
scala> logDebug("testing")
testing
scala>
See also Scala assert definition
As much as I hate macros in C/C++, at work we have #defines for the if part, which if false ignores (does not evaluate) the following expressions, but if true returns a stream into which stuff can be piped using the '<<' operator.
Like this:
LOGGER(LEVEL_INFO) << "A String";
I assume this would eliminate the extra 'complexity' that your tool sees, and also eliminates any calculating of the string, or any expressions to be logged if the level was not reached.
Here is an elegant solution using a ternary expression:
logger.info(logger.isInfoEnabled() ? "Log Statement goes here..." : null);
Consider a logging util function ...
void debugUtil(String s, Object... args) {
    if (LOG.isDebugEnabled())
        LOG.debug(s, args);
}
Then make the call with a "closure" around the expensive evaluation that you want to avoid:
debugUtil("We got a %s", new Object() {
    @Override
    public String toString() {
        // only evaluated if the debug statement is executed
        return expensiveCallToGetSomeValue().toString();
    }
});

Accessing the Body of a Function with Lua

I'm going back to the basics here but in Lua, you can define a table like so:
myTable = {}
myTable[1] = 12
Printing the table reference itself brings back a pointer to it. To access its elements you need to specify an index (i.e. exactly like you would an array)
print(myTable ) --prints pointer
print(myTable[1]) --prints 12
Now functions are a different story. You can define and print a function like so:
myFunc = function() local x = 14 end --Defined function
print(myFunc) --Printed pointer to function
Is there a way to access the body of a defined function? I am trying to put together a small code visualizer and would like to 'seed' a given function with special functions/variables to allow a visualizer to 'hook' itself into the code. I would need to be able to redefine the function either from a variable or a string.
There is no way to get access to the body source code of a given function in plain Lua. Source code is thrown away after compilation to byte-code.
Note, BTW, that a function may be defined at run time with a loadstring-like facility.
Partial solutions are possible, depending on what you actually want to achieve.
You may get the source code position from the debug library, if the debug library is enabled and debug symbols are not stripped from the bytecode. After that you may load the actual source file and extract the code from there.
You may decorate the functions you're interested in manually with the required metadata. Note that functions in Lua are valid table keys, so you may create a function-to-metadata table. You would want to make this table weak-keyed so it would not prevent functions from being collected by the GC.
If you would need a solution for analyzing Lua code, take a look at Metalua.
Check out Lua Introspective Facilities in the debugging library.
The main introspective function in the debug library is the debug.getinfo function. Its first parameter may be a function or a stack level. When you call debug.getinfo(foo) for some function foo, you get a table with some data about that function. The table may have the following fields:
The field you would want is func, I think.
Using the debug library is your only bet. Using that, you can get either the string (if the function is defined in a chunk that was loaded with 'loadstring') or the name of the file in which the function was defined, together with the line numbers at which the function definition starts and ends. See the documentation.
Here at my current job we have patched Lua so that it even gives you the column numbers for the start and end of the function, so you can get the function source using that. The patch is not very difficult to reproduce, but I don't think I'll be allowed to post it here :-(
You could accomplish this by creating an environment for each function (see setfenv) and using global (versus local) variables. Variables created in the function would then appear in the environment table after the function is executed.
env = {}
myFunc = function() x = 14 end
setfenv(myFunc, env)
myFunc()
print(myFunc) -- prints pointer
print(env.x) -- prints 14
Alternatively, you could make use of the Debug Library:
> myFunc = function() local x = 14 ; debug.debug() end
> myFunc()
> lua_debug> _, x = debug.getlocal(3, 1)
> lua_debug> print(x) -- prints 14
It would probably be more useful to you to retrieve the local variables with a hook function instead of explicitly entering debug mode (i.e. adding the debug.debug() call)
There is also a Debug Interface in the Lua C API.