Force boost logging core to shutdown? - boost-log

Is there a way to force destruction of the boost logging core singleton? It can be accessed via:
boost::log::core::get();
Which returns a shared pointer to the logging core. However, I need to shut it down / deallocate it explicitly before my application closes other resources, detaches DLLs, etc.
Is this possible?

No, the singleton only gets destroyed on application termination. But depending on what you're trying to achieve, you can make it release certain resources. For instance, by calling remove_all_sinks you can make it release all sinks, which will cause their destruction, unless they are referenced from elsewhere.
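A typical pre-shutdown sequence along those lines might look like this (a sketch, not library-mandated API usage beyond the calls named above; the function name shutdown_logging is made up for illustration):

```cpp
#include <boost/log/core.hpp>

// Call this before unloading DLLs / closing other resources.
// The core singleton itself survives until process exit, but after
// this it no longer references anything of yours.
void shutdown_logging()
{
    auto core = boost::log::core::get();
    core->flush();                    // push buffered records out to the sinks
    core->remove_all_sinks();         // drop the core's sink references; the sinks
                                      // are destroyed unless referenced elsewhere
    core->set_logging_enabled(false); // later log statements become cheap no-ops
}
```

This doesn't destroy the core, but it does sever its ties to your sinks and the resources behind them, which is usually what matters before DLL detach.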

Related

Inherent issue with custom memory manager and RTLD_DEEPBIND?

I need help regarding an issue for which I created bug 25533 a while ago.
The issue stems from a complex interaction between RTLD_DEEPBIND, libc, and a custom memory manager.
Let me explain the issue first.
My application uses a third-party application which makes use of dlopen with the RTLD_DEEPBIND flag.
This third-party application is linked statically into my application along with Google's memory manager (gperftools, https://github.com/gperftools/gperftools).
When the third-party application calls dlopen with RTLD_DEEPBIND, strdup (along with malloc) gets picked up from glibc.
The free definition still comes from the custom memory manager, and that's a problem: the malloc'ed memory was allocated by libc's malloc while the free call is directed to the custom memory manager.
I asked the third-party vendor to fix it at their end (the dlopen with RTLD_DEEPBIND), but they said they were unable to, as the change has side effects.
I filed a ticket against tcmalloc as well, but it also hit a dead end. As a workaround, I made use of malloc hooks.
However, I consider malloc hooks a temporary workaround, and I'm looking for a proper solution to this problem. IMO the proper fix belongs in glibc, which should be agnostic to whatever memory manager
an application chooses. I believe jemalloc (Facebook) has the same issue, and they have also used malloc hooks to work around it.
Through this email, I'm looking for suggestions on how to fix this issue properly.
I have created a small test case which is not exactly what is in my application but closely resembles what I see. It depends on a TCMALLOC_STATIC_LIB variable, which should be the path to the static library (.a) of the custom memory manager.

Configuring Weld's threadpool in Wildfly 19

How can I configure the size of the executor service the Weld subsystem of Wildfly uses to execute asynchronous event observer methods? Specifically I want to increase the size of the thread pool.
The Weld documentation has some config parameters but points out that those can be ignored by integrators, and Wildfly is one that does. The Wildfly documentation, on the other hand, contains configuration options for nearly every subsystem except the Weld subsystem.
I'm using Wildfly 19.
The actual executor service that WFLY uses for Weld purposes is WeldExecutorServices; more precisely, for async observer notification, this method returns the executor.
With a little bit of digging I could find that this is set in WeldSubsystemAdd, here. So it has some defaults but it is pulling the config from somewhere before using the default.
Therefore, you should be able to adjust this by configuring the given WildFly subsystem, in this case Weld.
I have found out that documentation mentions certain options for Weld subsystem, one of which is thread-pool-size. See https://docs.wildfly.org/19/wildscribe/subsystem/weld/index.html
I don't know exactly how to pass these options to WFLY because it has been a long time since I last used it. However, there is a generic way in which you can pass options to any subsystem. Once you figure that out, you should be good to go.
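The generic mechanism is the JBoss CLI (or editing standalone.xml directly). A sketch using the CLI, with the thread-pool-size attribute from the wildscribe page above (the value 20 is an arbitrary example):

```
$ ./bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=weld:write-attribute(name=thread-pool-size, value=20)
[standalone@localhost:9990 /] :reload
```

Equivalently, you can set the corresponding attribute on the weld subsystem element in standalone.xml and restart the server.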

Castle Windsor when is transient with disposable released? Burden

We're using Castle Windsor 2.1.0.6655.
I want to use the transient lifestyle for my resolved objects, but I want to check how this version of Castle deals with transients that have dependencies. Using the immediate window (Visual Studio), I can see the effects of resolving, disposing, and finally releasing, all the while checking whether the resolved object is still tracked.
eg.
resolved = container.Resolve(Id);
container.Kernel.ReleasePolicy.HasTrack(resolved)
= true
resolved.Dispose();
container.Kernel.ReleasePolicy.HasTrack(resolved)
= true
container.Release(resolved);
container.Kernel.ReleasePolicy.HasTrack(resolved)
= false
My concern is that these objects are continuing to be tracked between requests, as they are never released, meaning memory usage continues to rise.
I've read that Component Burden is related to this issue, but I haven't been able to find out exactly what this is in Castle 2.0 and greater.
The difficulty in 'releasing' is that the resolved objects are in fact part of services; they are used to provide ORM functions and mappings. I'm not sure that referencing the container in order to release is correct in these instances.
I'm wondering whether there is a way for me to see how many objects the container is referencing at a given point, without having to use memory profilers, as we don't have this available.
I thought I could maybe use the following:
container.Kernel.GetHandlers()
with the type I'm looking for, to see if tracked occurrences are increasing?
Version 2.1 will celebrate its 4th birthday very soon. I strongly recommend upgrading to version 3.1.
Not only because v2.1 is no longer supported and v3.1 is much newer, with many bugfixes, but also because it has some major improvements in the way it does tracking.
Also in v3.1 you will be able to enable a performance counter, that will report to you, in real time, the number of instances being tracked by the release policy.
Addressing the particular concern you're referring to, that sounds like an old threading bug that was fixed somewhere along the way. One more reason to upgrade.
Windsor has to be used with the R(egister) R(esolve) R(elease) pattern.
By default (and you should definitely stick with that...) all components are tracked/owned by the container... that's the Windsor beauty!
Until you (or the container itself) call Release, the instance will be held in memory, no matter whether you call Dispose directly (as in your sample).
That said, components registered as transient should be resolved at the composition root only, in other words as the first object of the dependency graph, or through a factory (late dependency).
Of course, keep in mind that when using a factory within the dependency graph, you may need to implement the RRR pattern explicitly.

What are logging libraries for?

This may be a stupid question, as most of my programming consists of one-man scientific computing research prototypes and developing relatively low-level libraries. I've never programmed in the large in an enterprise environment before. I've always wondered, what are the main things that logging libraries make substantially easier than just using good old fashioned print statements or file output, simple programming logic and a few global variables to determine how verbosely things get logged? How do you know when a few print statements or some basic file output ain't gonna cut it and you need a real logging library?
Logging helps debug problems especially when you move to production and problems occur on people's machines you can't control. Best laid plans never survive contact with the enemy, and logging helps you track how that battle went when faced with real world data.
Off-the-shelf logging libraries are easy to plug in and play with in less than 5 minutes.
Log libraries allow for various levels of logging per statement (FATAL, ERROR, WARN, INFO, DEBUG, etc).
And you can turn logging up or down to get more or less information at runtime.
In highly threaded systems, logs help sort out which thread was doing what. Log libraries can record information about threads and timestamps that ordinary print statements can't.
Most allow you to turn on only portions of the logging to get more detail. So one system can log debug information, and another can log only fatal errors.
Logging libraries allow you to configure logging through an external file so it's easy to turn on or off in production without having to recompile, deploy, etc.
3rd party libraries usually log so you can control them just like the other portions of your system.
Most libraries allow you to log portions or all of your statements to one or many files based on criteria. So you can log to both the console AND a log file.
Log libraries allow you to rotate logs so it will keep several log files based on many different criteria. Say after the log gets 20MB rotate to another file, and keep 10 log files around so that log data is always 100MB.
Some log statements can be compiled in or out (language dependent).
Log libraries can be extended to add new features.
You'll want to start using a logging libraries when you start wanting some of these features. If you find yourself changing your program to get some of these features you might want to look into a good log library. They are easy to learn, setup, and use and ubiquitous.
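Several of the features above can be sketched in a few lines with Python's standard logging module (the subsystem names here are made up for illustration):

```python
import logging

# Root configuration: one format with timestamp, logger name, and level;
# default threshold WARNING for everything.
logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

# Per-subsystem loggers under a common hierarchy.
net_log = logging.getLogger("app.network")
db_log = logging.getLogger("app.db")

# Turn up just one subsystem at runtime -- no recompile, no redeploy.
db_log.setLevel(logging.DEBUG)

net_log.debug("suppressed: app.network is still at WARNING")
db_log.debug("shown: app.db was dialed up to DEBUG")
db_log.error("shown: errors pass every threshold")
```

In a real deployment the same per-logger levels and handlers would come from a config file (logging.config.fileConfig or dictConfig) rather than code.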
They are used in environments where the requirements for logging may change, but the cost of changing or deploying a new executable is high. (Even when you have the source code, adding a one-line logging change to a program can be infeasible because of internal bureaucracy.)
The logging libraries provide a framework that the program will use to emit a wide variety of messages. These can be described by source (e.g. the logger object it is first sent to, often corresponding to the class the event has occurred in), severity, etc.
During runtime, the actual delivery of the messages is controlled using an "easily" edited config file. For normal situations most messages may be suppressed altogether. But if the situation changes, it is a simple fix to enable more messages, without needing to deploy a new program.
The above describes the ideal logging framework as I understand the intention; in practice I have used them in Java and Python and in neither case have I found them worth the added complexity. :-(
They're for logging things.
Or more seriously, for saving you having to write it yourself, giving you flexible options on where logs are stored (database, event log, text file, CSV, sent to a remote web service, delivered by pixies on a velvet cushion) and on what is logged at runtime, rather than having to redefine a global variable and then recompile.
If you're only writing for yourself then it's unlikely you need one, and it may introduce an external dependency you don't want, but once your libraries start to be used by others then having a logging framework in place may well help your users, and you, track down problems.
I know that a logging library is useful when I have more than one subsystem with "verbose logging," but where I only want to see that verbose data from one of them.
Certainly this can be achieved by having a global log level per subsystem, but for me it's easier to use a "system" of some sort for that.
I generally have a 2D logging environment too; "Info/Warning/Error" (etc) on one axis and "AI/UI/Simulation/Networking" (etc) on the other. With this I can specify the logging level that I care about seeing for each subsystem easily. It's not actually that complicated once it's in place, indeed it's a lot cleaner than having if my_logging_level == DEBUG then print("An error occurred"); Plus, the logging system can stuff file/line info into the messages, and then getting totally fancy you can redirect them to multiple targets pretty easily (file, TTY, debugger, network socket...).

Dealing with "java.lang.OutOfMemoryError: PermGen space" error

Recently I ran into this error in my web application:
java.lang.OutOfMemoryError: PermGen space
It's a typical Hibernate/JPA + IceFaces/JSF application running on Tomcat 6 and JDK 1.6.
Apparently this can occur after redeploying an application a few times.
What causes it and what can be done to avoid it?
How do I fix the problem?
The solution was to add these flags to the JVM command line when Tomcat is started:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
You can do that by shutting down the tomcat service, then going into the Tomcat/bin directory and running tomcat6w.exe. Under the "Java" tab, add the arguments to the "Java Options" box. Click "OK" and then restart the service.
If you get an error the specified service does not exist as an installed service you should run:
tomcat6w //ES//servicename
where servicename is the name of the server as viewed in services.msc
Source: orx's comment on Eric's Agile Answers.
You better try -XX:MaxPermSize=128M rather than -XX:MaxPermGen=128M.
I cannot tell the precise use of this memory pool, but it has to do with the number of classes loaded into the JVM. (Thus enabling class unloading for Tomcat can resolve the problem.) If your application generates and compiles classes on the fly, it is more likely to need a memory pool bigger than the default.
App server PermGen errors that happen after multiple deployments are most likely caused by references held by the container into your old apps' classloaders. For example, using a custom log level class will cause references to be held by the app server's classloader. You can detect these inter-classloader leaks by using modern (JDK6+) JVM analysis tools such as jmap and jhat to look at which classes continue to be held in your app, and redesigning or eliminating their use. Usual suspects are databases, loggers, and other base-framework-level libraries.
See Classloader leaks: the dreaded "java.lang.OutOfMemoryError: PermGen space" exception, and especially its followup post.
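The jmap/jhat inspection mentioned above looks roughly like this (JDK 6/7 tools; <pid> is your app server's process id, a placeholder here):

```
jmap -permstat <pid>                     # per-classloader statistics: live/dead loaders, class counts
jmap -dump:format=b,file=heap.bin <pid>  # binary heap dump
jhat heap.bin                            # then browse http://localhost:7000 and follow references
                                         # from your old webapp's classes back to their classloader
```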
A common mistake people make is thinking that heap space and PermGen space are the same, which is not at all true. You could have a lot of space remaining in the heap and still run out of memory in PermGen.
Common causes of OutOfMemory in PermGen are classloaders. Whenever a class is loaded into the JVM, all its metadata, along with its classloader, is kept in the PermGen area, and it will be garbage collected when the classloader that loaded it is ready for garbage collection. If a classloader has a memory leak, then all classes loaded by it remain in memory, causing a PermGen OutOfMemory once you repeat it a couple of times. The classic example is java.lang.OutOfMemoryError: PermGen space in Tomcat.
Now there are two ways to solve this:
1. Find the cause of the memory leak, if there is one.
2. Increase the size of the PermGen space using the JVM parameters -XX:MaxPermSize and -XX:PermSize.
You can also check 2 Solution of Java.lang.OutOfMemoryError in Java for more details.
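For Tomcat specifically, a common place to apply option 2 is CATALINA_OPTS in bin/setenv.sh (the sizes below are examples, not recommendations):

```
# $CATALINA_HOME/bin/setenv.sh -- picked up automatically by catalina.sh
export CATALINA_OPTS="-XX:PermSize=128m -XX:MaxPermSize=256m"
```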
Use the command line parameter -XX:MaxPermSize=128m for a Sun JVM (obviously substituting 128 for whatever size you need).
Try -XX:MaxPermSize=256m and if it persists, try -XX:MaxPermSize=512m
I added -XX:MaxPermSize=128m (you can experiment to see what works best) to the VM arguments, as I'm using the Eclipse IDE. In most JVMs, the default PermSize is around 64MB, which runs out of memory if there are too many classes or a huge number of strings in the project.
For eclipse, it is also described at answer.
STEP 1: Double-click on the Tomcat server in the Servers tab.
STEP 2: Open the launch configuration and add -XX:MaxPermSize=128m to the end of the existing VM arguments.
I've been butting my head against this problem while deploying and undeploying a complex web application too, and thought I'd add an explanation and my solution.
When I deploy an application on Apache Tomcat, a new ClassLoader is created for that app. The ClassLoader is then used to load all the application's classes, and on undeploy, everything's supposed to go away nicely. However, in reality it's not quite as simple.
One or more of the classes created during the web application's life holds a static reference which, somewhere along the line, references the ClassLoader. As the reference is originally static, no amount of garbage collecting will clean this reference up - the ClassLoader, and all the classes it's loaded, are here to stay.
And after a couple of redeploys, we encounter the OutOfMemoryError.
Now this has become a fairly serious problem. I could make sure that Tomcat is restarted after each redeploy, but that takes down the entire server, rather than just the application being redeployed, which is often not feasible.
So instead I've put together a solution in code, which works on Apache Tomcat 6.0. I've not tested on any other application servers, and must stress that this is very likely not to work without modification on any other application server.
I'd also like to say that personally I hate this code, and that nobody should be using this as a "quick fix" if the existing code can be changed to use proper shutdown and cleanup methods. The only time this should be used is if there's an external library your code is dependent on (In my case, it was a RADIUS client) that doesn't provide a means to clean up its own static references.
Anyway, on with the code. This should be called at the point where the application is undeploying - such as a servlet's destroy method or (the better approach) a ServletContextListener's contextDestroyed method.
//Get a list of all classes loaded by the current webapp classloader.
//Requires imports: java.lang.reflect.Field, java.lang.reflect.Modifier,
//java.util.ArrayList, java.util.List, java.util.Vector, and
//org.apache.catalina.loader.WebappClassLoader (a Tomcat 6 internal).
WebappClassLoader classLoader = (WebappClassLoader) getClass().getClassLoader();
Field classLoaderClassesField = null;
Class clazz = WebappClassLoader.class;
while (classLoaderClassesField == null && clazz != null) {
    try {
        classLoaderClassesField = clazz.getDeclaredField("classes");
    } catch (Exception exception) {
        //Field not declared on this class; keep walking up the hierarchy.
    }
    clazz = clazz.getSuperclass();
}
classLoaderClassesField.setAccessible(true);
List classes = new ArrayList((Vector) classLoaderClassesField.get(classLoader));
for (Object o : classes) {
    Class c = (Class) o;
    //Make sure you identify only the packages that are holding references to the classloader.
    //Allowing this code to clear all static references will result in all sorts
    //of horrible things (like java segfaulting).
    if (c.getName().startsWith("com.whatever")) {
        //Kill any static, non-final, non-primitive references within all these classes.
        for (Field f : c.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers())
                    && !Modifier.isFinal(f.getModifiers())
                    && !f.getType().isPrimitive()) {
                try {
                    f.setAccessible(true);
                    f.set(null, null);
                } catch (Exception exception) {
                    //Log the exception
                }
            }
        }
    }
}
classes.clear();
The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation’s area in memory is exhausted.
Any Java applications is allowed to use a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup.
Java memory is separated into different regions.
Metaspace: A new memory space is born
The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata, called Metaspace; this is similar to the Oracle JRockit and IBM JVMs.
The good news is that this means no more java.lang.OutOfMemoryError: PermGen space problems, and no need for you to tune and monitor this memory space anymore, when using Java 8 or higher.
Alternatively, you can switch to JRockit, which handles PermGen differently than Sun's JVM. It generally has better performance as well.
http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html
1) Increasing the PermGen Memory Size
The first thing one can do is to make the size of the permanent generation heap space bigger. This cannot be done with the usual -Xms (set initial heap size) and -Xmx (set maximum heap size) JVM arguments, since, as mentioned, the permanent generation heap space is entirely separate from the regular Java heap space,
and these arguments set the space for that regular Java heap. However, there are similar arguments which can be used (at least with the Sun/OpenJDK JVMs) to make the size of the permanent generation heap bigger:
-XX:MaxPermSize=128m
Default is 64m.
2) Enable Sweeping
Another way to take care of that for good is to allow classes to be unloaded so your PermGen never runs out:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
Stuff like that worked magic for me in the past. One thing though: there's a significant performance trade-off in using those, since permgen sweeps will make something like an extra 2 requests for every request you make, or something along those lines. You'll need to balance your use with the tradeoffs.
You can find the details of this error at:
http://faisalbhagat.blogspot.com/2014/09/java-outofmemoryerror-permgen.html
I had the problem we are talking about here. My scenario is Eclipse Helios + Tomcat + JSF, and what I was doing was deploying a simple application to Tomcat. I was seeing the same problem, and solved it as follows.
In Eclipse, go to the Servers tab and double-click on the registered server (in my case Tomcat 7.0); this opens the server's general registration information. In the "General Information" section, click on the link "Open launch configuration"; this opens the server's execution options. In the Arguments tab, in VM arguments, add these two entries at the end:
-XX:MaxPermSize=512m
-XX:PermSize=512m
and that's it.
The simplest answer these days is to use Java 8.
It no longer reserves memory exclusively for PermGen space, allowing the PermGen memory to co-mingle with the regular memory pool.
Keep in mind that you will have to remove all non-standard -XXPermGen...=... JVM startup parameters if you don't want Java 8 to complain that they don't do anything.
Open tomcat7w from Tomcat's bin directory or type Monitor Tomcat in start menu
(a tabbed window opens with various service information).
In the Java Options text area append this line:
-XX:MaxPermSize=128m
Set Initial Memory Pool to 1024 (optional).
Set Maximum Memory Pool to 1024 (optional).
Click Ok.
Restart the Tomcat service.
The PermGen space error occurs when the code needs more permanent-generation space than the JVM has provided.
The best solution for this problem on UNIX operating systems is to change some configuration in your bash profile. The following steps solve the problem.
Run the command gedit .bashrc in a terminal.
Create a JAVA_OPTS variable with the following value:
export JAVA_OPTS="-XX:PermSize=256m -XX:MaxPermSize=512m"
Save the bash file. Run the command exec bash in the terminal. Restart the server.
I hope this approach works for your problem. If you use a Java version lower than 8, this issue occurs sometimes; with Java 8, the problem never occurs.
Increasing the permanent generation size or tweaking GC parameters will NOT help if you have a real memory leak. If your application, or some 3rd-party library it uses, leaks classloaders, the only real and permanent solution is to find this leak and fix it. There are a number of tools that can help you; one of the most recent is Plumbr, which has just released a new version with the required capabilities.
Also if you are using log4j in your webapp, check this paragraph in log4j documentation.
It seems that if you are using PropertyConfigurator.configureAndWatch("log4j.properties"), you cause memory leaks when you undeploy your webapp.
I have a combination of Hibernate+Eclipse RCP, tried using -XX:MaxPermSize=512m and -XX:PermSize=512m and it seems to be working for me.
Set -XX:PermSize=64m -XX:MaxPermSize=128m. Later on you may also try increasing MaxPermSize. Hope it'll work. The same works for me; setting only MaxPermSize didn't work for me.
I tried several answers and the only thing what finally did the job was this configuration for the compiler plugin in the pom:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<fork>true</fork>
<meminitial>128m</meminitial>
<maxmem>512m</maxmem>
<source>1.6</source>
<target>1.6</target>
<!-- prevent PermGen space out of memory exception -->
<!-- <argLine>-Xmx512m -XX:MaxPermSize=512m</argLine> -->
</configuration>
</plugin>
hope this one helps.
jrockit resolved this for me as well; however, I noticed that the servlet restart times were much worse, so while it was better in production, it was kind of a drag in development.
The configuration of the memory depends on the nature of your app.
What are you doing?
What's the number of transactions processed?
How much data are you loading?
etc.
etc.
etc
Probably you could profile your app and start cleaning up some modules from your app.
Apparently this can occur after redeploying an application a few times
Tomcat has hot deploy, but it consumes memory. Try restarting your container once in a while. Also, you will need to know the amount of memory needed to run in production mode; this seems like a good time for that research.
They say that the latest revision of Tomcat (6.0.28 or 6.0.29) handles the task of redeploying servlets much better.
I ran into exactly the same problem, but unfortunately none of the suggested solutions really worked for me. The problem did not happen during deployment, and I was not doing any hot deployments either.
In my case the problem occurred every time at the same point during the execution of my web-application, while connecting (via hibernate) to the database.
This link (also mentioned earlier) provided enough insight to resolve the problem. Moving the JDBC (MySQL) driver out of WEB-INF and into the jre/lib/ext/ folder seems to have solved the problem. This is not the ideal solution, since upgrading to a newer JRE would require you to reinstall the driver.
Another candidate that could cause similar problems is log4j, so you might want to move that one as well.
First step in such case is to check whether the GC is allowed to unload classes from PermGen. The standard JVM is rather conservative in this regard – classes are born to live forever. So once loaded, classes stay in memory even if no code is using them anymore. This can become a problem when the application creates lots of classes dynamically and the generated classes are not needed for longer periods. In such a case, allowing the JVM to unload class definitions can be helpful. This can be achieved by adding just one configuration parameter to your startup scripts:
-XX:+CMSClassUnloadingEnabled
By default this is set to false, so to enable it you need to explicitly set the following option in the Java options. If you enable CMSClassUnloadingEnabled, the GC will sweep PermGen too and remove classes which are no longer used. Keep in mind that this option will work only when UseConcMarkSweepGC is also enabled, using the option below. So when running ParallelGC or, God forbid, Serial GC, make sure you have set your GC to CMS by specifying:
-XX:+UseConcMarkSweepGC
Assigning Tomcat more memory is NOT the proper solution.
The correct solution is to do a cleanup after the context is destroyed and recreated (the hot deploy). The solution is to stop the memory leaks.
If your Tomcat/Webapp Server is telling you that failed to unregister drivers (JDBC), then unregister them. This will stop the memory leaks.
You can create a ServletContextListener and configure it in your web.xml. Here is a sample ServletContextListener:
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.log4j.Logger;
import com.mysql.jdbc.AbandonedConnectionCleanupThread;
/**
*
* @author alejandro.tkachuk / calculistik.com
*
*/
public class AppContextListener implements ServletContextListener {
private static final Logger logger = Logger.getLogger(AppContextListener.class);
@Override
public void contextInitialized(ServletContextEvent arg0) {
logger.info("AppContextListener started");
}
@Override
public void contextDestroyed(ServletContextEvent arg0) {
logger.info("AppContextListener destroyed");
// manually unregister the JDBC drivers
Enumeration<Driver> drivers = DriverManager.getDrivers();
while (drivers.hasMoreElements()) {
Driver driver = drivers.nextElement();
try {
DriverManager.deregisterDriver(driver);
logger.info(String.format("Unregistering jdbc driver: %s", driver));
} catch (SQLException e) {
logger.info(String.format("Error unregistering driver %s", driver), e);
}
}
// manually shutdown clean up threads
try {
AbandonedConnectionCleanupThread.shutdown();
logger.info("Shutting down AbandonedConnectionCleanupThread");
} catch (InterruptedException e) {
logger.warn("SEVERE problem shutting down AbandonedConnectionCleanupThread: ", e);
e.printStackTrace();
}
}
}
And here you configure it in your web.xml:
<listener>
<listener-class>
com.calculistik.mediweb.context.AppContextListener
</listener-class>
</listener>
"They" are wrong because I'm running 6.0.29 and have the same problem even after setting all of the options. As Tim Howland said above, these options only put off the inevitable. They allow me to redeploy 3 times before hitting the error instead of every time I redeploy.
In case you are getting this in the eclipse IDE, even after setting the parameters
--launcher.XXMaxPermSize, -XX:MaxPermSize, etc., and you are still getting the same error, it is most likely that Eclipse is using a buggy version of the JRE, which would have been installed by some third-party application and set as the default. These buggy versions do not pick up the PermSize parameters, so no matter what you set, you still keep getting these memory errors. So, in your eclipse.ini add the following parameters:
-vm <path to the right JRE directory>/<name of javaw executable>
Also make sure you set the default JRE in the preferences in the eclipse to the correct version of java.
The only way that worked for me was with the JRockit JVM. I have MyEclipse 8.6.
The JVM's heap stores all the objects generated by a running Java program. Java uses the new operator to create objects, and memory for new objects is allocated on the heap at run time. Garbage collection is the mechanism of automatically freeing up the memory contained by the objects that are no longer referenced by the program.
I was having a similar issue.
Mine is a JDK 7 + Maven 3.0.2 + Struts 2.0 + Google Guice dependency-injection based project.
Whenever I tried running the mvn clean package command, it showed the following error and "BUILD FAILURE" occurred:
org.apache.maven.surefire.util.SurefireReflectionException: java.lang.reflect.InvocationTargetException; nested exception is java.lang.reflect.InvocationTargetException: null
java.lang.reflect.InvocationTargetException
Caused by: java.lang.OutOfMemoryError: PermGen space
I tried all of the useful tips and tricks above, but unfortunately none worked for me.
What worked for me is described step by step below:
Go to your pom.xml.
Search for <artifactId>maven-surefire-plugin</artifactId>.
Add a new <configuration> element and then an <argLine> sub-element in which you pass -Xmx512m -XX:MaxPermSize=256m, as shown below:
<configuration>
<argLine>-Xmx512m -XX:MaxPermSize=256m</argLine>
</configuration>
Hope it helps, happy programming :)