Windsor Performance Counter missing in PerfMon - castle-windsor

I would like to add the Windsor performance counter to my Performance Monitor...
I have configured Windsor as indicated in the docs:
var diagnostic = LifecycledComponentsReleasePolicy.GetTrackedComponentsDiagnostic(container.Kernel);
var counter = LifecycledComponentsReleasePolicy.GetTrackedComponentsPerformanceCounter(new PerformanceMetricsFactory());
container.Kernel.ReleasePolicy = new LifecycledComponentsReleasePolicy(diagnostic, counter);
Then I run my Web API application and open Performance Monitor. But when I try to add a new counter, I do not find the "Castle Windsor" section.
What am I doing wrong?
PS: I am using Windsor 4.

It is likely Windsor doesn't have permission to create the performance counter category and counter (i.e. to write to the registry) and is swallowing the SecurityException in PerformanceMetricsFactory.Initialize.
Run your application or Visual Studio elevated as administrator; you will only need to do this once.

Related

SpringJUnit4ClassRunner customized for multitenant environments

I work on a multi-tenant application. The current tenant information is managed via thread locals, which are set by a filter for each request.
During integration tests (non-web) this filter does not apply, so I am looking for a way to set this thread local for unit tests.
I started thinking about an annotation on the test class or methods (including @Before and @After). This could be something like @AsTenant("tenantId") (let's assume we only need the tenant id).
I am basically looking for a way to extend SpringJUnit4ClassRunner to be aware of the annotations and set the thread locals at the right time. Does anybody have experience with or ideas on where to hook this in? (I am not very familiar with test runners.)
Thanks in advance!
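For illustration, here is a minimal sketch of the annotation-lookup half of this idea. All names (AsTenant, TenantContext, TenantTestSupport) are hypothetical, and the actual wiring into SpringJUnit4ClassRunner or a TestExecutionListener is deliberately left out:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical annotation, as sketched in the question.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface AsTenant {
    String value();
}

// Stand-in for the application's thread-local tenant holder.
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static void set(String tenantId) { CURRENT.set(tenantId); }
    static String get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

class TenantTestSupport {
    // Resolve the tenant id for a test method: a method-level annotation
    // wins over a class-level one. A runner or listener would call this
    // before each test and TenantContext.clear() afterwards.
    static String resolveTenant(Class<?> testClass, String methodName) {
        try {
            Method m = testClass.getDeclaredMethod(methodName);
            AsTenant ann = m.getAnnotation(AsTenant.class);
            if (ann == null) {
                ann = testClass.getAnnotation(AsTenant.class);
            }
            return ann == null ? null : ann.value();
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }
}

// Example test class using the annotation.
@AsTenant("default-tenant")
class SomeTenantTest {
    @AsTenant("acme")
    void testWithMethodLevelTenant() {}

    void testWithClassLevelTenant() {}
}

In Spring, a custom TestExecutionListener calling this helper from beforeTestMethod/afterTestMethod would likely be a lighter hook than subclassing the runner itself.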

Laravel & PHPUnit : allow process isolation to prevent Mysql Too many connections error

For the past four months we have been building a complex web app with Laravel 4, with good unit test coverage. We now have 159 tests and 592 assertions to protect against regressions and allow us to easily refactor our app.
A nice picture, but for the past few days we have been getting the following error in the last tests of the suite:
PDOException: SQLSTATE[HY000] [1040] Too many connections
The reason is simple: all tests run in the same process, and MySQL allows only a limited number of simultaneous connections. We now have too many tests. If I delete a few tests in the middle of my test suite, the last ones pass.
The solution could be to run PHPUnit in process isolation, as in the config below, but the Laravel tests do not seem to support being launched like that. I get another error in each test:
PHPUnit_Framework_Exception: Notice: Constant LARAVEL_START already defined in /.../.../autoload.php on line 3
<?xml version="1.0" encoding="UTF-8"?>
<phpunit backupGlobals="false"
         backupStaticAttributes="false"
         bootstrap="bootstrap/autoload.php"
         colors="true"
         convertErrorsToExceptions="true"
         convertNoticesToExceptions="true"
         convertWarningsToExceptions="true"
         processIsolation="true"
         stopOnFailure="false"
         syntaxCheck="false"
>
</phpunit>
So my question is: how could I configure the Laravel tests to work with processIsolation="true", or do you see another solution to my problem?
You can now do DB::connection()->setPdo(null) to close the connection in your tests' tearDown; that should solve it. If that doesn't work, you can do unset($this->app['db']) in any test extending Laravel's TestCase.
As per http://www.neontsunami.com/posts/too-many-connections-using-phpunit-for-testing-laravel-51
This worked well in Laravel 5.1
public function tearDown()
{
    $this->beforeApplicationDestroyed(function () {
        DB::disconnect();
    });
    parent::tearDown();
}
For Laravel 4, you can use \DB::disconnect('connection') in your tearDown() function. See the doc here: http://laravel.com/docs/database#accessing-connections
"If you need to disconnect from the given database due to exceeding the underlying PDO instance's max_connections limit, use the disconnect method"
I would take a look at Mocks and remove your MySQL dependency: https://github.com/padraic/mockery#mocking-public-static-methods
Going forward, I would actually suggest focusing more on testing your SQL. My firm recently spent a good amount of money hiring DBAs who really turned our legacy slowness around.

Creating azure throttling exception

How can we create a throttling exception easily, in a cost-effective manner, in SQL Azure? This is to test the TransientFaultHandling libraries in Windows Azure.
Found a way! Create a mock class that implements ITransientErrorDetectionStrategy and returns true. Use this class in:
var retryPolicy = new RetryPolicy<MockClass>(retryStrategy);
That is it! No need to create a genuine throttling exception.

Tracing Castle Windsor Resolution of Type

Is there any way to trace exactly what Castle Windsor is doing when resolving a type?
I am looking for a TraceSource name, or log4net (etc.) logger name. If this does not exist where is the best place to hook into the framework to provide my own logging code?
The reason is that we have deployed the exact same build/config of our software to two different virtual servers (both created from the same image), and one of them "works" while the other doesn't.
On the failing deployment, our own logs show that a component that was expected to be injected into another is null. On the other machine, the logs show everything is healthy.
I am at a loss as to why this might happen, and was looking to trace the Castle container resolution code.
EDIT:
Running Castle Windsor Release 2.0 on .NET 3.5 SP1
Thanks.
You could try hooking into the kernel's DependencyResolving event:
container.Kernel.DependencyResolving += (componentModel, dependencyModel, dependency) => { /* log the resolution here */ };
Or adding an ISubDependencyResolver:
container.Kernel.Resolver.AddSubResolver(new MyDependencyResolver());

Dealing with "java.lang.OutOfMemoryError: PermGen space" error

Recently I ran into this error in my web application:
java.lang.OutOfMemoryError: PermGen space
It's a typical Hibernate/JPA + IceFaces/JSF application running on Tomcat 6 and JDK 1.6.
Apparently this can occur after redeploying an application a few times.
What causes it and what can be done to avoid it?
How do I fix the problem?
The solution was to add these flags to JVM command line when Tomcat is started:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
You can do that by shutting down the Tomcat service, then going into the Tomcat/bin directory and running tomcat6w.exe. Under the "Java" tab, add the arguments to the "Java Options" box. Click "OK" and then restart the service.
If you get an error saying the specified service does not exist as an installed service, you should run:
tomcat6w //ES//servicename
where servicename is the name of the server as viewed in services.msc
Source: orx's comment on Eric's Agile Answers.
You should try -XX:MaxPermSize=128M rather than -XX:MaxPermGen=128M (MaxPermGen is not a valid option name).
I cannot tell you the precise use of this memory pool, but it has to do with the number of classes loaded into the JVM. (Thus enabling class unloading for Tomcat can resolve the problem.) If your application generates and compiles classes on the fly, it is more likely to need a memory pool bigger than the default.
App server PermGen errors that happen after multiple deployments are most likely caused by references held by the container into your old apps' classloaders. For example, using a custom log level class will cause references to be held by the app server's classloader. You can detect these inter-classloader leaks by using modern (JDK6+) JVM analysis tools such as jmap and jhat to look at which classes continue to be held in your app, and redesigning or eliminating their use. Usual suspects are databases, loggers, and other base-framework-level libraries.
See Classloader leaks: the dreaded "java.lang.OutOfMemoryError: PermGen space" exception, and especially its followup post.
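As a complement to offline tools like jmap and jhat, you can also watch class-loading counts from inside the running JVM via the standard ClassLoadingMXBean. A small sketch (the ClassLoadStats wrapper is just an illustrative name, not a real library class):

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

// Thin wrapper around the JVM's class-loading statistics. A loaded-class
// count that grows steadily across redeploys, with few classes ever
// unloaded, is a strong hint that old classloaders are being pinned.
class ClassLoadStats {
    static int loadedClassCount() {
        ClassLoadingMXBean bean = ManagementFactory.getClassLoadingMXBean();
        return bean.getLoadedClassCount();
    }

    static long totalLoadedClassCount() {
        return ManagementFactory.getClassLoadingMXBean().getTotalLoadedClassCount();
    }

    static long unloadedClassCount() {
        return ManagementFactory.getClassLoadingMXBean().getUnloadedClassCount();
    }
}

Logging these numbers before and after each redeploy narrows down whether classes are actually being unloaded, before you reach for a heap dump.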
A common mistake is to think that heap space and PermGen space are the same, which is not at all true. You could have a lot of space remaining in the heap and still run out of memory in PermGen.
A common cause of OutOfMemoryError in PermGen is the ClassLoader. Whenever a class is loaded into the JVM, all its metadata, along with its ClassLoader, is kept in the PermGen area, and it will be garbage collected only when the ClassLoader that loaded it becomes eligible for garbage collection. If the ClassLoader has a memory leak, then all the classes it loaded will remain in memory and cause a PermGen OutOfMemoryError once you repeat the process a couple of times. The classic example is java.lang.OutOfMemoryError: PermGen space in Tomcat.
Now there are two ways to solve this:
1. Find the cause of the memory leak, if there is one.
2. Increase the size of the PermGen space using the JVM parameters -XX:MaxPermSize and -XX:PermSize.
You can also check 2 Solution of Java.lang.OutOfMemoryError in Java for more details.
Use the command line parameter -XX:MaxPermSize=128m for a Sun JVM (obviously substituting 128 for whatever size you need).
Try -XX:MaxPermSize=256m and if it persists, try -XX:MaxPermSize=512m
I added -XX:MaxPermSize=128m (you can experiment to see what works best) to the VM arguments, as I'm using the Eclipse IDE. In most JVMs, the default PermSize is around 64MB, which runs out of memory if the project has too many classes or a huge number of strings.
For Eclipse, it is also described in this answer.
STEP 1: Double-click on the Tomcat server in the Servers tab.
STEP 2: Open the launch configuration and add -XX:MaxPermSize=128m to the end of the existing VM arguments.
I've been butting my head against this problem while deploying and undeploying a complex web application too, and thought I'd add an explanation and my solution.
When I deploy an application on Apache Tomcat, a new ClassLoader is created for that app. The ClassLoader is then used to load all the application's classes, and on undeploy, everything's supposed to go away nicely. However, in reality it's not quite as simple.
One or more of the classes created during the web application's life holds a static reference which, somewhere along the line, references the ClassLoader. As the reference is originally static, no amount of garbage collecting will clean this reference up - the ClassLoader, and all the classes it's loaded, are here to stay.
And after a couple of redeploys, we encounter the OutOfMemoryError.
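A toy version of this kind of leak, and of the reflective cleanup described next, might look like the following. LeakyRegistry and StaticCleaner are made-up example classes, not Tomcat API:

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// A made-up example of the problem: static references that nothing ever
// clears. As long as 'lastClient' points at an instance of a webapp class,
// that class - and the ClassLoader that loaded it - stay reachable.
class LeakyRegistry {
    static final List<Object> CACHE = new ArrayList<>();
    static Object lastClient;

    static void remember(Object client) {
        CACHE.add(client);
        lastClient = client;
    }
}

class StaticCleaner {
    // Null out every static, non-final, non-primitive field of a class,
    // mirroring the cleanup loop in the full Tomcat listing below.
    static void clearStatics(Class<?> c) {
        for (Field f : c.getDeclaredFields()) {
            int mods = f.getModifiers();
            if (Modifier.isStatic(mods) && !Modifier.isFinal(mods) && !f.getType().isPrimitive()) {
                try {
                    f.setAccessible(true);
                    f.set(null, null);
                } catch (Exception e) {
                    // log and continue
                }
            }
        }
    }
}

Note that static final fields are deliberately skipped, which is why this approach cannot clean up every possible leak.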
Now this has become a fairly serious problem. I could make sure that Tomcat is restarted after each redeploy, but that takes down the entire server, rather than just the application being redeployed, which is often not feasible.
So instead I've put together a solution in code, which works on Apache Tomcat 6.0. I've not tested on any other application servers, and must stress that this is very likely not to work without modification on any other application server.
I'd also like to say that personally I hate this code, and that nobody should be using this as a "quick fix" if the existing code can be changed to use proper shutdown and cleanup methods. The only time this should be used is if there's an external library your code is dependent on (In my case, it was a RADIUS client) that doesn't provide a means to clean up its own static references.
Anyway, on with the code. This should be called at the point where the application is undeploying - such as a servlet's destroy method or (the better approach) a ServletContextListener's contextDestroyed method.
//Get a list of all classes loaded by the current webapp classloader
WebappClassLoader classLoader = (WebappClassLoader) getClass().getClassLoader();
Field classLoaderClassesField = null;
Class clazz = WebappClassLoader.class;
while (classLoaderClassesField == null && clazz != null) {
    try {
        classLoaderClassesField = clazz.getDeclaredField("classes");
    } catch (Exception exception) {
        //do nothing
    }
    clazz = clazz.getSuperclass();
}
classLoaderClassesField.setAccessible(true);
List classes = new ArrayList((Vector) classLoaderClassesField.get(classLoader));

for (Object o : classes) {
    Class c = (Class) o;
    //Make sure you identify only the packages that are holding references to the classloader.
    //Allowing this code to clear all static references will result in all sorts
    //of horrible things (like java segfaulting).
    if (c.getName().startsWith("com.whatever")) {
        //Kill any static references within all these classes.
        for (Field f : c.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers())
                    && !Modifier.isFinal(f.getModifiers())
                    && !f.getType().isPrimitive()) {
                try {
                    f.setAccessible(true);
                    f.set(null, null);
                } catch (Exception exception) {
                    //Log the exception
                }
            }
        }
    }
}
classes.clear();
The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation's area in memory is exhausted.
Any Java application is allowed to use only a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup.
Java memory is separated into different regions, of which the permanent generation is one.
Metaspace: A new memory space is born
The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata, called Metaspace, similar to the Oracle JRockit and IBM JVMs.
The good news is that this means no more java.lang.OutOfMemoryError: PermGen space problems, and no need for you to tune and monitor this memory space anymore, once you move to Java 8 or higher.
Alternatively, you can switch to JRockit, which handles PermGen differently than Sun's JVM. It generally has better performance as well.
http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html
1) Increasing the PermGen Memory Size
The first thing one can do is to make the size of the permanent generation heap space bigger. This cannot be done with the usual -Xms (set initial heap size) and -Xmx (set maximum heap size) JVM arguments, since, as mentioned, the permanent generation heap space is entirely separate from the regular Java heap space,
and these arguments set the space for that regular Java heap space. However, there are similar arguments which can be used (at least with the Sun/OpenJDK JVMs) to make the permanent generation heap bigger:
-XX:MaxPermSize=128m
The default is 64m.
2) Enable Sweeping
Another way to take care of that for good is to allow classes to be unloaded so your PermGen never runs out:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
Stuff like that worked magic for me in the past. One thing, though: there's a significant performance trade-off in using those options, since PermGen sweeps add roughly the cost of a couple of extra requests for every request you make, or something along those lines. You'll need to balance your use against the trade-offs.
You can find the details of this error at:
http://faisalbhagat.blogspot.com/2014/09/java-outofmemoryerror-permgen.html
I had the problem we are discussing here; my scenario was Eclipse Helios + Tomcat + JSF, and what I was doing was deploying a simple application to Tomcat. I was seeing the same problem, and solved it as follows.
In Eclipse, go to the Servers tab and double-click on the registered server (in my case Tomcat 7.0); this opens the server's general configuration. In the "General Information" section, click the "Open launch configuration" link; this opens the server's execution options. In the Arguments tab, add these two entries to the end of the VM arguments:
-XX:MaxPermSize=512m
-XX:PermSize=512m
and you're done.
The simplest answer these days is to use Java 8.
It no longer reserves memory exclusively for PermGen space, allowing the PermGen memory to co-mingle with the regular memory pool.
Keep in mind that you will have to remove all non-standard -XXPermGen...=... JVM startup parameters if you don't want Java 8 to complain that they don't do anything.
Open tomcat7w from Tomcat's bin directory, or type Monitor Tomcat in the Start menu
(a tabbed window opens with various service information).
In the Java Options text area append this line:
-XX:MaxPermSize=128m
Set Initial Memory Pool to 1024 (optional).
Set Maximum Memory Pool to 1024 (optional).
Click Ok.
Restart the Tomcat service.
The PermGen space error occurs when the code requires more space than the JVM provides for it.
On UNIX operating systems, a convenient fix for this problem is to change some configuration in your bash file. The following steps solve the problem.
Run the command gedit .bashrc in a terminal.
Create the JAVA_OPTS variable with the following value:
export JAVA_OPTS="-XX:PermSize=256m -XX:MaxPermSize=512m"
Save the bash file. Run the command exec bash in the terminal. Restart the server.
I hope this approach works for your problem. With a Java version lower than 8 this issue occurs sometimes, but with Java 8 the problem never occurs.
Increasing the Permanent Generation size or tweaking GC parameters will NOT help if you have a real memory leak. If your application, or some 3rd-party library it uses, leaks class loaders, the only real and permanent solution is to find this leak and fix it. There are a number of tools that can help you; one of the more recent is Plumbr, which has just released a new version with the required capabilities.
Also if you are using log4j in your webapp, check this paragraph in log4j documentation.
It seems that if you are using PropertyConfigurator.configureAndWatch("log4j.properties"), you cause memory leaks when you undeploy your webapp.
I have a combination of Hibernate+Eclipse RCP, tried using -XX:MaxPermSize=512m and -XX:PermSize=512m and it seems to be working for me.
Set -XX:PermSize=64m -XX:MaxPermSize=128m. Later on you may also try increasing MaxPermSize. Hope it'll work. The same works for me; setting only MaxPermSize didn't work for me.
I tried several answers, and the only thing that finally did the job was this configuration for the compiler plugin in the POM:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <fork>true</fork>
        <meminitial>128m</meminitial>
        <maxmem>512m</maxmem>
        <source>1.6</source>
        <target>1.6</target>
        <!-- prevent PermGen space out of memory exception -->
        <!-- <argLine>-Xmx512m -XX:MaxPermSize=512m</argLine> -->
    </configuration>
</plugin>
Hope this one helps.
JRockit resolved this for me as well; however, I noticed that the servlet restart times were much worse, so while it was better in production, it was kind of a drag in development.
The configuration of the memory depends on the nature of your app:
What are you doing?
What's the number of transactions processed?
How much data are you loading?
etc.
Probably you could profile your app and start cleaning up some modules from your app.
Apparently this can occur after redeploying an application a few times
Tomcat has hot deploy, but it consumes memory. Try restarting your container once in a while. You will also need to know the amount of memory required to run in production mode; this seems like a good time for that research.
They say that the latest revision of Tomcat (6.0.28 or 6.0.29) handles the task of redeploying servlets much better.
I ran into exactly the same problem, but unfortunately none of the suggested solutions really worked for me. The problem did not happen during deployment, and I was not doing any hot deployments either.
In my case the problem occurred every time at the same point during the execution of my web-application, while connecting (via hibernate) to the database.
This link (also mentioned earlier) provided enough insight to resolve the problem. Moving the JDBC (MySQL) driver out of WEB-INF and into the jre/lib/ext/ folder seems to have solved the problem. This is not the ideal solution, since upgrading to a newer JRE would require you to reinstall the driver.
Another candidate that could cause similar problems is log4j, so you might want to move that one as well.
The first step in such a case is to check whether the GC is allowed to unload classes from PermGen. The standard JVM is rather conservative in this regard: classes are born to live forever. So once loaded, classes stay in memory even if no code is using them anymore. This can become a problem when the application creates lots of classes dynamically and the generated classes are not needed for longer periods. In such a case, allowing the JVM to unload class definitions can be helpful. This can be achieved by adding just one configuration parameter to your startup scripts:
-XX:+CMSClassUnloadingEnabled
By default this is set to false, so to enable it you need to explicitly set this option in your Java options. If you enable CMSClassUnloadingEnabled, the GC will sweep PermGen too and remove classes which are no longer used. Keep in mind that this option works only when UseConcMarkSweepGC is also enabled, using the option below. So when running ParallelGC or, God forbid, Serial GC, make sure you have set your GC to CMS by specifying:
-XX:+UseConcMarkSweepGC
Assigning Tomcat more memory is NOT the proper solution.
The correct solution is to do a cleanup after the context is destroyed and recreated (the hot deploy); in other words, stop the memory leaks.
If your Tomcat/webapp server is telling you that it failed to unregister (JDBC) drivers, then unregister them yourself. This will stop the memory leaks.
You can create a ServletContextListener and configure it in your web.xml. Here is a sample ServletContextListener:
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.log4j.Logger;

import com.mysql.jdbc.AbandonedConnectionCleanupThread;

/**
 * @author alejandro.tkachuk / calculistik.com
 */
public class AppContextListener implements ServletContextListener {

    private static final Logger logger = Logger.getLogger(AppContextListener.class);

    @Override
    public void contextInitialized(ServletContextEvent arg0) {
        logger.info("AppContextListener started");
    }

    @Override
    public void contextDestroyed(ServletContextEvent arg0) {
        logger.info("AppContextListener destroyed");

        // manually unregister the JDBC drivers
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            try {
                DriverManager.deregisterDriver(driver);
                logger.info(String.format("Unregistering jdbc driver: %s", driver));
            } catch (SQLException e) {
                logger.info(String.format("Error unregistering driver %s", driver), e);
            }
        }

        // manually shut down clean-up threads
        try {
            AbandonedConnectionCleanupThread.shutdown();
            logger.info("Shutting down AbandonedConnectionCleanupThread");
        } catch (InterruptedException e) {
            logger.warn("SEVERE problem shutting down AbandonedConnectionCleanupThread: ", e);
            e.printStackTrace();
        }
    }
}
And here you configure it in your web.xml:
<listener>
    <listener-class>
        com.calculistik.mediweb.context.AppContextListener
    </listener-class>
</listener>
"They" are wrong because I'm running 6.0.29 and have the same problem even after setting all of the options. As Tim Howland said above, these options only put off the inevitable. They allow me to redeploy 3 times before hitting the error instead of every time I redeploy.
In case you are getting this in the Eclipse IDE even after setting the parameters
--launcher.XXMaxPermSize, -XX:MaxPermSize, etc., and you still get the same error, it is most likely that Eclipse is using a buggy version of the JRE which was installed by some third-party application and set as the default. These buggy versions do not pick up the PermSize parameters, so no matter what you set, you still keep getting these memory errors. So, in your eclipse.ini add the following parameters:
-vm <path to the right JRE directory>/<name of javaw executable>
Also make sure you set the default JRE in the Eclipse preferences to the correct version of Java.
The only way that worked for me was with the JRockit JVM. I have MyEclipse 8.6.
The JVM's heap stores all the objects generated by a running Java program. Java uses the new operator to create objects, and memory for new objects is allocated on the heap at run time. Garbage collection is the mechanism of automatically freeing up the memory contained by the objects that are no longer referenced by the program.
I was having a similar issue.
Mine is a JDK 7 + Maven 3.0.2 + Struts 2.0 + Google Guice dependency-injection based project.
Whenever I tried running the mvn clean package command, it showed the following error and "BUILD FAILURE" occurred:
org.apache.maven.surefire.util.SurefireReflectionException: java.lang.reflect.InvocationTargetException; nested exception is java.lang.reflect.InvocationTargetException: null
java.lang.reflect.InvocationTargetException
Caused by: java.lang.OutOfMemoryError: PermGen space
I tried all of the useful tips and tricks above, but unfortunately none worked for me.
What worked for me is described step by step below:
Go to your pom.xml.
Search for <artifactId>maven-surefire-plugin</artifactId>.
Add a new <configuration> element and then an <argLine> sub-element in which you pass -Xmx512m -XX:MaxPermSize=256m, as shown below:
<configuration>
    <argLine>-Xmx512m -XX:MaxPermSize=256m</argLine>
</configuration>
Hope it helps, happy programming :)