Many times I have seen error logging done like this:
System.out.println("Method aMethod with parameters a:"+a+" b: "+b);
print("Error in line 88");
So, what are the best practices for logging an error?
EDIT:
This is Java, but it could be C/C++, Basic, etc.
Logging directly to the console is horrendous and, frankly, the mark of an inexperienced developer. The only reasons to do this sort of thing are that 1) the developer is unaware of other approaches, and/or 2) the developer has not thought one bit about what will happen when the code is deployed to a production site, and how the application will be maintained at that point. Dealing with an application that is logging 1GB/day or more of completely unneeded debug output is maddening.
The generally accepted best practice is to use a Logging framework that has concepts of:
Different log objects - Different classes/modules/etc can log to different loggers, so you can choose to apply different log configurations to different portions of the application.
Different log levels - so you can tweak the logging configuration to only log errors in production, to log all sorts of debug and trace info in a development environment, etc.
Different log outputs - the framework should allow you to configure where the log output is sent to without requiring any changes in the codebase. Some examples of different places you might want to send log output to are files, files that roll over based on date/size, databases, email, remoting sinks, etc.
The log framework should never never never throw any Exceptions or errors from the logging code. Your application should not fail to load or fail to start because the log framework cannot create its log file or obtain a lock on the file (unless this is a critical requirement, maybe for legal reasons, for your app).
The eventual log framework you will use will of course depend on your platform. Some common options:
Java:
Apache Commons Logging
log4j
logback
Built-in java.util.logging
.NET:
log4net
C++:
log4cxx
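For illustration, here is a minimal sketch of what those concepts look like with log4j's classic API (the class and method names are made up for the example): a per-class logger, different levels for routine versus error output, and the actual destination left entirely to configuration.
import org.apache.log4j.Logger;

public class OrderService {
    // One logger per class, so output can be tuned per package/class in the config
    private static final Logger log = Logger.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        log.debug("Placing order " + orderId); // only emitted when DEBUG is enabled
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            log.error("Failed to place order " + orderId, e); // message plus stack trace
        }
    }
}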
Apache Commons Logging is not intended for an application's general logging. It's intended to be used by libraries or APIs that don't want to force a logging implementation on the API's users.
There are also classloading issues with Commons Logging.
Pick one of the [many] logging APIs, the most widely used probably being log4j or the Java Logging API.
If you want implementation independence, you might want to consider SLF4J, by the original author of log4j.
Having picked an implementation, use the logging levels/severities within it consistently, so that searching and filtering logs is easier.
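As a rough sketch of what coding against the SLF4J facade looks like (class and method names are invented for the example), note that nothing in the code names the underlying implementation; that is chosen by whichever binding is on the classpath:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentProcessor {
    // SLF4J facade: the backend (log4j, java.util.logging, logback, ...) is picked
    // at deployment time, not in the code
    private static final Logger log = LoggerFactory.getLogger(PaymentProcessor.class);

    public void process(String paymentId) {
        log.info("Processing payment {}", paymentId);
        try {
            // ... work ...
        } catch (RuntimeException e) {
            log.error("Payment {} failed", paymentId, e);
        }
    }
}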
The easiest way to log errors in a consistent format is to use a logging framework such as Log4j (assuming you're using Java). It is useful to include a logging section in your code standards to make sure all developers know what needs to be logged. The nice thing about most logging frameworks is they have different logging levels so you can control how verbose the logging is between development, test, and production.
A best practice is to use the java.util.logging framework.
Then you can log messages in either of these formats:
log.warning("..");
log.fine("..");
log.finer("..");
log.finest("..");
Or
log.log(Level.WARNING, "blah blah blah", e);
Then you can use a logging.properties file (example below) to switch between levels of logging, and to do all sorts of clever stuff like logging to files with rotation, etc.
handlers = java.util.logging.ConsoleHandler
.level = WARNING
java.util.logging.ConsoleHandler.level = ALL
com.example.blah.level = FINE
com.example.testcomponents.level = FINEST
Frameworks like log4j and others should be avoided, in my opinion; Java already has everything you need.
EDIT
This can apply as a general practice for any programming language. Being able to control all levels of logging from a single property file is often very important in enterprise applications.
Some suggested best-practices
Use a logging framework. This will allow you to:
Easily change the destination of your log messages
Filter log messages based on severity
Support internationalised log messages
If you are using java, then slf4j is now preferred to Jakarta commons logging as the logging facade.
As stated, slf4j is a facade, and you then have to pick an underlying implementation: log4j, java.util.logging, or SLF4J's 'simple' binding.
Follow your framework's advice for ensuring that expensive logging operations are not needlessly carried out (see the sketch below)
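For example, with SLF4J that advice usually amounts to parameterized messages plus an explicit guard around genuinely expensive work; a small sketch (class and method names invented):
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CartLogger {
    private static final Logger log = LoggerFactory.getLogger(CartLogger.class);

    void logCart(String userId, List<String> items) {
        // Parameterized message: the string is only assembled if DEBUG is enabled
        log.debug("User {} has {} items in the cart", userId, items.size());

        // Explicit guard when building the message itself is expensive
        if (log.isDebugEnabled()) {
            log.debug("Full cart dump: {}", String.join(", ", items));
        }
    }
}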
The Apache Commons Logging API mentioned above is a great resource. Referring back to Java, there is also a standard error output stream (System.err).
Directly from the Java API:
This stream is already open and ready to accept output data. Typically this stream corresponds to display output or another output destination specified by the host environment or user. By convention, this output stream is used to display error messages or other information that should come to the immediate attention of a user even if the principal output stream, the value of the variable out, has been redirected to a file or other destination that is typically not continuously monitored.
Aside from the technical considerations in the other answers, it is advisable to log a meaningful message and perhaps some steps to avoid the error in the future (depending on the error, of course).
You get more out of an I/O error when the message states something like "Could not read from file X; you don't have the appropriate permission."
See more examples on SO or search the web.
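In Java terms, that might look something like the following (a sketch with made-up class names, using java.util.logging): say what failed, on which resource, and what the reader can do about it.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ConfigReader {
    private static final Logger log = Logger.getLogger(ConfigReader.class.getName());

    byte[] readConfig(Path file) {
        try {
            return Files.readAllBytes(file);
        } catch (IOException e) {
            // Name the resource and suggest a remedy, not just "error in line 88"
            log.log(Level.SEVERE, "Could not read config file " + file
                    + "; check that the current user has read permission", e);
            return new byte[0];
        }
    }
}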
There really is no single best practice for logging an error. It basically just needs to follow a consistent pattern (within the software/company/etc.) that provides enough information to track the problem down. For example, you might want to keep track of the time, the method, parameters, the calling method, etc.
So long as you don't just print "Error in "
Related
We have a number of loosely coupled apps, some in PHP and some in Python.
It would be beneficial to have some centralized place where they could get both global and app-specific configuration information.
Something like, for Python:
conf=config_server.get_params(url='http://config_server/get/My_app/all', auth=my_auth_data)
and then ideally use parameters as potentially nested attributes, e.g. conf.APP.URL, conf.GLOBAL.MAX_SALES
I was considering making my own config server app, but wasn't sure what the pros and cons of such an approach would be vs., e.g., storing config in a centralized database or any other mode accessible from multiple sites.
Also, I wonder if I am perhaps missing some readily available tool with good support that could do this (I had a look at Puppet and Ansible, but they seemed to be very evolved tools for doing so much more than this; I also looked at the Software Recommendations SE for this, but they already have a number of such questions unanswered).
I think it would be a good idea for your configuration mechanism not to be hard-coded to obtain configuration data via a particular technology (such as file, web server or database), but rather be able to obtain configuration data from any of several different technologies. I illustrate this with the following pseudo-code examples:
cfg = getConfig("file.cfg"); # from a file
cfg = getConfig("file#file.cfg"); # also from a file
cfg = getConfig("url#http://config_server/file.cfg"); # from the specified URL
cfg = getConfig("exec#getConfigFromDB.py"); # from stdout of command
The parameter passed to getConfig() might be obtained from, say, a command-line option. The "exec#..." format is a flexible mechanism, but carries the potential danger of somebody specifying a malicious command to execute, for example, "exec#rm -rf /".
This approach means you can experiment with whatever you consider to be an ideal source-of-configuration-data technology and later, if you discover that technology to be inappropriate, it will be trivial to discard it and use a different source-of-configuration-data technology instead. Indeed, the decision for which source-of-configuration-data technology to use might vary from one use case/user to another.
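To make the idea concrete, here is a rough Java sketch of such a dispatcher (the scheme names mirror the pseudo-code above; everything else is invented for illustration):
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

public class ConfigLoader {
    // Hypothetical dispatcher: the part before '#' selects the technology,
    // the part after it says where/how to fetch the configuration data.
    static Properties getConfig(String source) throws IOException {
        String scheme = source.contains("#") ? source.substring(0, source.indexOf('#')) : "file";
        String target = source.contains("#") ? source.substring(source.indexOf('#') + 1) : source;

        Properties props = new Properties();
        switch (scheme) {
            case "file":
                try (InputStream in = Files.newInputStream(Paths.get(target))) {
                    props.load(in);
                }
                break;
            case "url":
                try (InputStream in = new URL(target).openStream()) {
                    props.load(in);
                }
                break;
            case "exec":
                // Runs an external command and parses its stdout; as noted above,
                // this is flexible but dangerous if the command string is untrusted.
                Process p = new ProcessBuilder(target.split(" ")).start();
                try (InputStream in = p.getInputStream()) {
                    props.load(in);
                }
                break;
            default:
                throw new IOException("Unknown configuration source: " + scheme);
        }
        return props;
    }
}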
I developed a C++ and Java configuration file parser (sorry, no Python or PHP implementations) called Config4*. If you look at chapters 2 (overview of syntax) and 3 (overview of API) of the Config4* Getting Started Guide, you will notice that it supports the kind of flexible approach I discuss in this answer (the "url#..." format is not supported, but "exec#curl -sS ..." provides the same functionality). 99 percent of the time, I end up using configuration files, but I find it comforting to know that my applications can trivially switch to a different source-of-configuration-data technology whenever the need might arise.
When there is some internal exception in a WebFlux application, why do I want/need to write code to handle these exceptions? I understand handling issues and returning appropriate ServerResponse bodies when the service client incorrectly invokes a service, or when a non-error condition (e.g., a query returns an empty cursor) occurs.
But, other than generating debug information into a logfile, is there anything to be gained by rolling your own exception-handling components? This approach makes "more sense" to me in a monolithic application, where one is trying to avoid a scenario where the app "just dies".
But for a service implementation, especially if there's some incentive not to expose too much about the internal implementation to a client, why wouldn't Spring's default error/exception handling (and "500 Internal Server Error" response/message) be sufficient?
So, after some time and thought (and little, but still helpful-and-appreciated feedback), I guess it boils down to:
(a) - It provides a localized context to "do things", like logging information about the exception/error condition, or categorizing the severity of the exception within-the-context of a server-client interaction.
(b) - It provides a localized context to hide/expose information from a client, based on the nature of the exception/error condition and whether the server is deployed in a production or test environment.
(c) - Being localized, it makes maintenance/modification a bit easier, as the handling of exceptions/errors is not scattered throughout the code.
(a) and (c) are enough to make me believe it's worth the effort.
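For what it's worth, one common shape for (a) and (b) in a Spring application is a single advice class that logs the full detail while the client only sees a sanitized body; whether this fits your setup is a judgment call, so treat it as a sketch rather than the one way to do it:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ApiExceptionHandler {
    private static final Logger log = LoggerFactory.getLogger(ApiExceptionHandler.class);

    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> handleUnexpected(Exception ex) {
        // (a) one place to log the full detail ...
        log.error("Unhandled exception while serving request", ex);
        // (b) ... while the client only sees a sanitized message
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                             .body("Internal server error");
    }
}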
Back in my (limited) Java programming days, I remember this nice feature where, if I tried to make a call that could throw an exception, Java would require me to handle that exception or pass it off to something that could.
Anyways, I am writing a piece of PowerShell code that messes around with objects in Active Directory, so I want to be very, very careful. I've gotten occasional remote timeout errors, and that is leading me toward the more general question:
"How can I know ahead of time which of these cmdlets can throw exceptions indicating dangerous conditions, and what is the list of those possible exceptions?"
I am wondering if the list of exceptions, per cmdlet, is way too long to address all possibilities. I also don't want to just write a generic exception handler, as PowerShell seems to do OK in the general sense of error handling.
What's the best way to determine, per cmdlet, the list of all exceptions that can occur? Is this even possible / feasible?
Thanks!
Heh, I think you started out on the wrong foot there. The jury is very much out on whether Java's checked exceptions are a nice idea.
That said, what you ask is very difficult to answer. In Java, it's clear to the compiler through static analysis what methods throw (or at least what they declare they will throw) what exceptions; this is a closed system existing solely in the process space of the compiler. In the real world of distributed heterogeneous systems, there is no universal checked exception framework. PowerShell cmdlets exist in the domain of a .NET appdomain in a win32 process, but they talk to backing systems on foreign servers using obtuse protocols like Active Directory which are a world apart both in implementation and general conception. Exceptional conditions may "flow" from one domain to the next, but they get warped, wrapped and mushed in all directions before they bubble up to you, the poor user at the console. In short, the answer is no. The general purpose Cmdlets (get-item, get-childitem) do not know about the underlying provider system's propensity to cause errors, and nor can they reliably know this.
However, if you have a dedicated module for Active Directory (like ActiveDirectory module from Microsoft, or Quest's QAD module) then it's possible they have listed the exceptions that their cmdlets will surface in the case of exceptional conditions in the backing system. This help would be found - most likely - in the module (or snapin) help files, or on a per-cmdlet basis. Try running the following command:
ps> get-help do-something -full | more
This will show the full invocation syntax along with any notes the developers have felt good enough to bless you with. Pay particular attention to the footer; it's here you'll usually find a more general help topic like "about_thesecmdlets" that you may view with: get-help about_thesecmdlets
Hope this helps.
This may be a stupid question, as most of my programming consists of one-man scientific computing research prototypes and developing relatively low-level libraries. I've never programmed in the large in an enterprise environment before. I've always wondered, what are the main things that logging libraries make substantially easier than just using good old fashioned print statements or file output, simple programming logic and a few global variables to determine how verbosely things get logged? How do you know when a few print statements or some basic file output ain't gonna cut it and you need a real logging library?
Logging helps debug problems, especially when you move to production and problems occur on people's machines you can't control. Best laid plans never survive contact with the enemy, and logging helps you track how that battle went when faced with real-world data.
Off-the-shelf logging libraries are easy to plug in and get working in less than five minutes.
Log libraries allow for various levels of logging per statement (FATAL, ERROR, WARN, INFO, DEBUG, etc).
And you can turn logging up or down to get more or less information at runtime.
In highly threaded systems, logging helps sort out which thread was doing what. Log libraries can log information about threads, timestamps, etc. that ordinary print statements can't.
Most allow you to turn on only portions of the logging to get more detail. So one system can log debug information, and another can log only fatal errors.
Logging libraries allow you to configure logging through an external file so it's easy to turn on or off in production without having to recompile, deploy, etc.
Third-party libraries usually log as well, so you can control their output just like the other portions of your system.
Most libraries allow you to log portions or all of your statements to one or many files based on criteria. So you can log to both the console AND a log file.
Log libraries allow you to rotate logs, keeping several log files around based on many different criteria. Say, after a log reaches 20MB, rotate to another file and keep 10 log files, so the log data never takes more than about 200MB.
Some log statements can be compiled in or out (language dependent).
Log libraries can be extended to add new features.
You'll want to start using a logging library when you start wanting some of these features. If you find yourself changing your program to get some of these features, you might want to look into a good log library. They are easy to learn, set up, and use, and they are ubiquitous.
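A small example of several of those features together, using only java.util.logging (the file name and size limits here are arbitrary): per-handler levels, console plus file output, and size-based rotation.
import java.io.IOException;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class LoggingSetup {
    public static void main(String[] args) throws IOException {
        Logger log = Logger.getLogger("com.example.app");
        log.setLevel(Level.FINE);
        log.setUseParentHandlers(false); // avoid duplicate output via the root logger

        // Console gets only warnings and errors
        ConsoleHandler console = new ConsoleHandler();
        console.setLevel(Level.WARNING);
        log.addHandler(console);

        // File gets everything, rotated: at most 10 files of roughly 20 MB each
        FileHandler file = new FileHandler("app-%g.log", 20_000_000, 10, true);
        file.setLevel(Level.ALL);
        file.setFormatter(new SimpleFormatter());
        log.addHandler(file);

        log.fine("detailed debug message, file only");
        log.warning("something went wrong, console and file");
    }
}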
They are used in environments where the requirements for logging may change, but the cost of changing or deploying a new executable is high. (Even when you have the source code, adding a one-line logging change to a program can be infeasible because of internal bureaucracy.)
The logging libraries provide a framework that the program will use to emit a wide variety of messages. These can be described by source (e.g. the logger object it is first sent to, often corresponding to the class the event has occurred in), severity, etc.
During runtime, the actual delivery of the messages is controlled using an "easily" edited config file. For normal situations most messages may be suppressed altogether. But if the situation changes, it is a simpler fix to enable more messages, without needing to deploy a new program.
The above describes the ideal logging framework as I understand the intention; in practice I have used them in Java and Python and in neither case have I found them worth the added complexity. :-(
They're for logging things.
Or, more seriously, for saving you having to write it yourself, giving you flexible options on where logs are stored (database, event log, text file, CSV, sent to a remote web service, delivered by pixies on a velvet cushion) and on what is logged at runtime, rather than having to redefine a global variable and then recompile.
If you're only writing for yourself then it's unlikely you need one, and it may introduce an external dependency you don't want, but once your libraries start to be used by others then having a logging framework in place may well help your users, and you, track down problems.
I know that a logging library is useful when I have more than one subsystem with "verbose logging," but where I only want to see that verbose data from one of them.
Certainly this can be achieved by having a global log level per subsystem, but for me it's easier to use a "system" of some sort for that.
I generally have a 2D logging environment too: "Info/Warning/Error" (etc.) on one axis and "AI/UI/Simulation/Networking" (etc.) on the other. With this I can easily specify the logging level that I care about seeing for each subsystem. It's not actually that complicated once it's in place; indeed, it's a lot cleaner than having if my_logging_level == DEBUG then print("An error occurred") scattered everywhere. Plus, the logging system can stuff file/line info into the messages, and if you get totally fancy you can redirect them to multiple targets pretty easily (file, TTY, debugger, network socket...).
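For what it's worth, a bare-bones version of that two-axis idea in Java might look like this (the subsystem names are just the ones mentioned above; everything else is invented):
import java.util.EnumMap;
import java.util.Map;

public class TwoAxisLog {
    enum Level { DEBUG, INFO, WARNING, ERROR }
    enum Subsystem { AI, UI, SIMULATION, NETWORKING }

    // Minimum level to emit, per subsystem
    private final Map<Subsystem, Level> thresholds = new EnumMap<>(Subsystem.class);

    TwoAxisLog() {
        for (Subsystem s : Subsystem.values()) {
            thresholds.put(s, Level.WARNING); // default: warnings and errors only
        }
    }

    void setThreshold(Subsystem subsystem, Level level) {
        thresholds.put(subsystem, level);
    }

    void log(Subsystem subsystem, Level level, String message) {
        // Emit only if this subsystem is currently verbose enough
        if (level.ordinal() >= thresholds.get(subsystem).ordinal()) {
            System.err.printf("[%s][%s] %s%n", subsystem, level, message);
        }
    }

    public static void main(String[] args) {
        TwoAxisLog log = new TwoAxisLog();
        log.setThreshold(Subsystem.NETWORKING, Level.DEBUG); // verbose only for networking
        log.log(Subsystem.NETWORKING, Level.DEBUG, "sent 42 bytes"); // shown
        log.log(Subsystem.AI, Level.DEBUG, "path recalculated");     // filtered out
    }
}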
I've recently started to play with Ruby on Rails which favours convention over configuration and relies on sensible defaults to tie various aspects of the application together.
I was thinking that if this concept of sensible default configuration were applied to general configuration for various frameworks, it might save some development headache.
For example, in a .NET app I usually want to log an exception in the Windows event log using the Enterprise Library Exception Handling Block, but if I don't explicitly state the behaviour I want in a config file then EL will complain. I think that instead, if it can't find custom configuration, it should revert to a sensible default configuration, like logging my exception in the event log.
Would this be a good or bad concept for frameworks to adopt for their configuration?
I work a lot with a framework that does this exact thing. My trouble with this way of working is that:
the framework grew to have an excessive number of configuration keys that are actually never used/set in a configuration file.
behavior of the software sometimes becomes implicit; I want to explicitly set the system to behave a certain way instead of having it fall back on some other code path due to a 'default'.
a missed typo in a configuration key may result in a very long diagnostic session before figuring out what is going on.
When I forget to set a configuration value, I would rather have the software tell me, instead of it assuming some form of behavior that I might not be after at all.
I'd prefer a 'template' configuration file in which I change what I want and have the unchanged settings serve as the defaults (see the sketch below).
Figuring out which convention the software picked when debugging can also be a lot of wasted time.
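As an illustration of that 'template with defaults' preference, java.util.Properties supports exactly this layering; the keys and file path here are made up for the example:
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class AppConfig {
    static Properties load(Path userConfig) throws IOException {
        // The "template": sensible defaults, spelled out explicitly in one place
        Properties defaults = new Properties();
        defaults.setProperty("log.target", "eventlog");
        defaults.setProperty("log.level", "WARN");

        // User settings override the defaults; anything not set falls through to them
        Properties config = new Properties(defaults);
        if (Files.exists(userConfig)) {
            try (InputStream in = Files.newInputStream(userConfig)) {
                config.load(in);
            }
        }
        // config.getProperty("log.level") returns "WARN" unless the user's file overrides it
        return config;
    }
}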