What is the absolute minimal mocking that must be done to pass this test?
code:
class PrivateStaticFinal {
    private static final Integer variable = 0;
    public static Integer method() { return variable + 1; }
}
test:
@RunWith(PowerMockRunner.class)
@PrepareForTest(PrivateStaticFinal.class)
class PrivateStaticFinalTest {

    @Test
    public void testMethod() {
        //TODO PrivateStaticFinal.variable = 100
        assertEquals(PrivateStaticFinal.method(), 101);
    }
}
related: Mock private static final variables in the testing class (no clear answer)
Disclaimer: After a lot of hunting around on various threads, I have found an answer. It can be done, but the general consensus is that it is not very safe; seeing as you are doing this ONLY IN UNIT TESTS, though, I think you accept those risks :)
The answer is not mocking, since most mocking does not allow you to hack into a final. The answer is a little more "hacky": you actually modify the private field using Java's core java.lang.reflect.Field and java.lang.reflect.Modifier classes (reflection). Looking at this answer, I was able to piece together the rest of your test, without any mocking, in a way that solves your problem.
The problem with that answer was that I kept running into a NoSuchFieldException when trying to modify the variable. The fix for that came from another post on how to access a field that is private rather than public.
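For reference, the root cause of that exception: Class.getField only returns public fields, whereas Class.getDeclaredField also finds private fields declared on the class. A minimal sketch against the class from the question (the FieldLookupDemo wrapper is just for illustration):

import java.lang.reflect.Field;

public class FieldLookupDemo {
    public static void main(String[] args) throws Exception {
        // getDeclaredField sees fields of any visibility declared directly on the class
        Field ok = PrivateStaticFinal.class.getDeclaredField("variable");
        System.out.println("getDeclaredField found: " + ok);

        try {
            // getField only returns public fields, so this call throws NoSuchFieldException
            PrivateStaticFinal.class.getField("variable");
        } catch (NoSuchFieldException expected) {
            System.out.println("getField cannot see the private field");
        }
    }
}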
Reflection/Field Manipulation Explained:
Since mocking cannot handle final, what we end up doing instead is hacking into the root of the field itself. When we use Field manipulation (reflection), we look for the specific variable inside a class/object. Once Java finds it, we get its "modifiers", which tell the variable what restrictions/rules it has, like final, static, private, public, etc. We find the right variable and then tell the code that it is accessible, which allows us to change these modifiers. Once we have changed the "access" at the root so we can manipulate it, we toggle off the "final" part of it. We can then change the value and set it to whatever we need.
To put it simply, we modify the field to allow us to change its properties, remove the final property, and then change the value, since it is no longer final. For more info on this, check out the post where the idea came from.
So, step by step, we pass in the field we want to manipulate and...
// Suppress Java language access checks so we can toy with the private field
field.setAccessible(true);
// Grab the internal "modifiers" field of the Field class itself
Field modifiersField = Field.class.getDeclaredField("modifiers");
// Allow us to change the modifiers
modifiersField.setAccessible(true);
// Remove the final modifier from the field by clearing the FINAL bit in the modifiers
modifiersField.setInt(field, field.getModifiers() & ~Modifier.FINAL);
// Set the new value (null target because the field is static)
field.set(null, newValue);
Combining this all into a new SUPER ANSWER, you get:

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

@RunWith(PowerMockRunner.class)
@PrepareForTest()
public class PrivateStaticFinalTest {

    @Test
    public void testMethod() {
        try {
            setFinalStatic(PrivateStaticFinal.class.getDeclaredField("variable"), Integer.valueOf(100));
        } catch (SecurityException e) {
            fail();
        } catch (NoSuchFieldException e) {
            fail();
        } catch (Exception e) {
            fail();
        }
        assertEquals(PrivateStaticFinal.method(), Integer.valueOf(101));
    }

    static void setFinalStatic(Field field, Object newValue) throws Exception {
        field.setAccessible(true);
        // remove the final modifier from the field
        Field modifiersField = Field.class.getDeclaredField("modifiers");
        modifiersField.setAccessible(true);
        modifiersField.setInt(field, field.getModifiers() & ~Modifier.FINAL);
        field.set(null, newValue);
    }
}
Update
The above solution will only work for constants that are initialized in a static block. When a constant is declared and initialized at the same time, the compiler may inline it, at which point any change to the original value is ignored.
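For illustration (the class and field names here are my own, not from the question), the difference looks roughly like this:

public class Constants {
    // A compile-time constant expression of a primitive (or String) type:
    // javac may copy the literal 0 straight into calling code, so changing
    // the field later via reflection is not seen by those call sites.
    private static final int INLINED = 0;

    // Initialized in a static block: not a compile-time constant, so every
    // read goes through the field and a reflective change is visible.
    private static final int NOT_INLINED;
    static {
        NOT_INLINED = 0;
    }

    public static int fromInlined() { return INLINED + 1; }
    public static int fromNotInlined() { return NOT_INLINED + 1; }
}

Incidentally, the field in the question is an Integer rather than an int; boxed values are not compile-time constants, so inlining does not affect that particular example.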
Related
I want to use ASM to verify how often certain methods are called and what their arguments and results are. However, at runtime it fails with a java.lang.LinkageError: loader (instance of sun/misc/Launcher$AppClassLoader): attempted duplicate class definition for name: "com/foo/bar/DefaultType".
For that reason I want to ensure that it is not an ASM (ObjectWeb) problem, so I tried to just pass the bytes through without any modification, using the following code:
@Override
public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
                        ProtectionDomain protectionDomain, byte[] classfileBuffer)
        throws IllegalClassFormatException {
    byte[] result;
    if (className.startsWith("com/foo/bar"))
    {
        ClassReader reader = new ClassReader(classfileBuffer);
        try
        {
            ClassWriter writer = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
            reader.accept(writer, 0);
            result = writer.toByteArray();
        }
        catch (Exception e)
        {
            result = null;
        }
    }
    else
    {
        // returning null tells the JVM to use the original, unmodified class bytes
        result = null;
    }
    return result;
}
But even after this modification I get the same Error. Any hints what I should change to get this code working?
Late answer to an old question.
One way these errors can occur is due to how the COMPUTE_FRAMES option of ClassWriter is implemented. In particular, the frame computation sometimes needs to figure out a common superclass for two given classes; to do this, it loads the classes it is interested in using Class.forName. If your codebase uses a non-trivial class loading setup, it may happen that a class gets loaded in an unexpected class loader this way (I can't recall the precise conditions, but I have had this happen to me). The solution is to override the getCommonSuperClass method of ClassWriter to perform the same computation in a safer way.
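For what it's worth, here is a rough sketch of that override (the class name and the fallback behaviour are my own choices, not something ASM prescribes): resolve both types against a class loader you choose, for example the loader handed to transform(), instead of whatever loader Class.forName would otherwise use:

import org.objectweb.asm.ClassWriter;

// Sketch: a ClassWriter that resolves common supertypes against a specific
// class loader, so COMPUTE_FRAMES does not load classes in the wrong loader.
class LoaderAwareClassWriter extends ClassWriter {
    private final ClassLoader loader;

    LoaderAwareClassWriter(int flags, ClassLoader loader) {
        super(flags);
        this.loader = loader;
    }

    @Override
    protected String getCommonSuperClass(String type1, String type2) {
        try {
            Class<?> c1 = Class.forName(type1.replace('/', '.'), false, loader);
            Class<?> c2 = Class.forName(type2.replace('/', '.'), false, loader);
            if (c1.isAssignableFrom(c2)) return type1;
            if (c2.isAssignableFrom(c1)) return type2;
            if (c1.isInterface() || c2.isInterface()) return "java/lang/Object";
            // walk up c1's superclass chain until we find a common ancestor
            do {
                c1 = c1.getSuperclass();
            } while (!c1.isAssignableFrom(c2));
            return c1.getName().replace('.', '/');
        } catch (ClassNotFoundException e) {
            // If a type cannot be loaded at all, fall back to Object rather than failing the transform.
            return "java/lang/Object";
        }
    }
}

The transformer in the question would then construct it with something like new LoaderAwareClassWriter(ClassWriter.COMPUTE_FRAMES, loader); for a pure pass-through test that makes no modifications, you may not need COMPUTE_FRAMES at all.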
My question is specific to AS3.
When I use this language, it seems to me that any variable with a getter and setter should be made public instead.
Whether you do this:
public class Test
{
    private var _foo:String;

    public function Test()
    {
        foo = "";
    }

    public function get foo():String
    {
        return _foo;
    }

    public function set foo(value:String):void
    {
        _foo = value;
    }
}
or this:
public class Test
{
    public var foo:String;

    public function Test()
    {
        foo = "";
    }
}
you will end up doing this eventually (to get or set your foo variable from another class) :
testObject.foo
And using a public variable looks much cleaner to me.
I know that I am missing something.
Could you please show me what it is?
Before we continue, understand that when you define getters and setters, they don't actually need to be associated with a property defined within the class. Getters simply have to return a value, and setters have to accept a value (but can do absolutely nothing if you wish).
Now to answer the question:
The simplest reason is that you can make properties read-only or write-only by declaring one without the other. With regard to read-only, take a moment to consider the benefits of having a class expose a value without other parts of your application being able to modify it. As an example:
public class Person
{
    public var firstName:String = "Marty";
    public var lastName:String = "Wallace";

    public function get fullName():String
    {
        return firstName + " " + lastName;
    }
}
Notice that the property fullName is the result of firstName and lastName. This gives a consistent, accurate value that you would expect if firstName or lastName were to be modified:
person.firstName = "Daniel";
trace(person.fullName); // Daniel Wallace
If fullName was actually a public variable alongside the other two, you would end up with unexpected results like:
person.fullName = "Daniel Wallace";
trace(person.firstName); // Marty - Wait, what?
With that out of the way, notice that getters and setters are functions. Realize that a function can contain more than one line of code. This means that your getters and setters can actually do a lot of things on top of simply getting and setting a value - like validation, updating other values, etc. For example:
public class Slideshow
{
    private var _currentSlide:int = 0;
    private var _slides:Vector.<Sprite> = new <Sprite>[];

    public function set currentSlide(value:int):void
    {
        _currentSlide = value;

        if(_currentSlide < 0) _currentSlide = _slides.length - 1;
        if(_currentSlide >= _slides.length) _currentSlide = 0;

        var slide:Sprite = _slides[_currentSlide];

        // Do something with the new slide, like transition to it.
    }

    public function get currentSlide():int
    {
        return _currentSlide;
    }
}
Now we can transition between slides in the slideshow with a simple:
slideshow.currentSlide = 4;
And even continuously loop the slideshow with consistent use of:
slideshow.currentSlide ++;
There are actually many good reasons to consider using accessors rather than directly exposing fields of a class - beyond just the argument of encapsulation and making future changes easier.
Here are some of the reasons:
Encapsulation of behavior associated with getting or setting the property - this allows additional functionality (like validation) to be added more easily later.
Hiding the internal representation of the property while exposing a property using an alternative representation.
Insulating your public interface from change - allowing the public interface to remain constant while the implementation changes without affecting existing consumers.
Controlling the lifetime and memory management (disposal) semantics of the property - particularly important in non-managed memory environments (like C++ or Objective-C).
Providing a debugging interception point for when a property changes at runtime - debugging when and where a property changed to a particular value can be quite difficult without this in some languages.
Improved interoperability with libraries that are designed to operate against property getters/setters - Mocking, Serialization, and WPF come to mind.
Allowing inheritors to change the semantics of how the property behaves and is exposed by overriding the getter/setter methods.
Allowing the getter/setter to be passed around as lambda expressions rather than values.
Getters and setters can allow different access levels - for example, the get may be public but the set could be protected (see the sketch after this list).
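To make the validation and access-level points above concrete outside of AS3, here is a hedged sketch in Java (the ideas are language-agnostic, and the Slide class here is purely hypothetical): the setter validates its input, and the getter and setter have different access levels.

public class Slide {
    private int index;

    // Anyone may read the value...
    public int getIndex() {
        return index;
    }

    // ...but only this package and subclasses may change it,
    // and the setter validates the new value before accepting it.
    protected void setIndex(int index) {
        if (index < 0) {
            throw new IllegalArgumentException("index must be >= 0");
        }
        this.index = index;
    }
}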
This is a similar pattern to ones stated elsewhere and detailed in this blog post. I had this working with Windsor 2.5.4 pretty much as stated in the blog post, but decided to switch to Windsor 3. When I did this I noticed that the memory usage of the application went up over time - my guess was that components weren't being released.
There were a couple of modifications to the code in the blogpost, which may have caused the behaviour to differ.
Here is my AutoRelease interceptor (straight out of the blogpost, here for convenience and the lazy ;) )
[Transient]
public class AutoReleaseHandlerInterceptor : IInterceptor
{
    private static readonly MethodInfo Execute = typeof(IDocumentHandler).GetMethod("Process");

    private readonly IKernel _kernel;

    public AutoReleaseHandlerInterceptor(IKernel kernel)
    {
        _kernel = kernel;
    }

    public void Intercept(IInvocation invocation)
    {
        if (invocation.Method != Execute)
        {
            invocation.Proceed();
            return;
        }

        try
        {
            invocation.Proceed();
        }
        finally
        {
            _kernel.ReleaseComponent(invocation.Proxy);
        }
    }
}
One of my deviations from the blog post is the selector that the typed factory uses:-
public class ProcessorSelector : DefaultTypedFactoryComponentSelector
{
protected override Func<IKernelInternal, IReleasePolicy, object> BuildFactoryComponent(MethodInfo method,
string componentName,
Type componentType,
IDictionary additionalArguments)
{
return new MyDocumentHandlerResolver(componentName,
componentType,
additionalArguments,
FallbackToResolveByTypeIfNameNotFound,
GetType()).Resolve;
}
protected override string GetComponentName(MethodInfo method, object[] arguments)
{
return null;
}
protected override Type GetComponentType(MethodInfo method, object[] arguments)
{
var message = arguments[0];
var handlerType = typeof(IDocumentHandler<>).MakeGenericType(message.GetType());
return handlerType;
}
}
What might be noticeable is that I do not use the default resolver. (This is where, perhaps, the problem lies...).
public class MyDocumentHandlerResolver : TypedFactoryComponentResolver
{
    public override object Resolve(IKernelInternal kernel, IReleasePolicy scope)
    {
        return kernel.Resolve(componentType, additionalArguments, scope);
    }
}
(I omitted the ctor for brevity - nothing special happens there, it just calls the base ctor.)
The reason I did this was because the default resolver would try to resolve by name and not by Type- and fail. In this case, I know I only ever need to resolve by type, so I just overrode the Resolve method.
The final piece of the puzzle will be the installer.
container.AddFacility<TypedFactoryFacility>()
.Register(
Component.For<AutoReleaseHandlerInterceptor>(),
Component.For<ProcessorSelector>().ImplementedBy<ProcessorSelector>(),
Classes.FromAssemblyContaining<MessageHandler>()
.BasedOn(typeof(IDocumentHandler<>))
.WithService.Base()
.Configure(c => c.LifeStyle.Is(LifestyleType.Transient)),
Component.For<IDocumentHandlerFactory>()
.AsFactory(c => c.SelectedWith<ProcessorSelector>()));
Stepping through the code, the interceptor is called and the finally clause is executed (i.e. I didn't get the method name wrong). However, the component does not seem to be released (the performance counter shows this: each invocation of the factory's create method increases the counter by one).
So far, my workaround has been to add a void Release(IDocumentHandler handler) method to my factory interface; after the handler.Process() call, the caller explicitly releases the handler instance. This seems to do the job - the performance counter goes up, and as the processing is done, it goes back down.
Here is the factory:
public interface IDocumentHandlerFactory
{
    IDocumentHandler GetHandlerForDocument(IDocument document);
    void Release(IDocumentHandler handler);
}
And here is how I use it:
IDocumentHandler handler = _documentHandlerFactory.GetHandlerForDocument(document);
handler.Process();
_documentHandlerFactory.Release(handler);
Doing the Release explicitly therefore negates the need for the interceptor, but my real question is why this behaviour differs between the two Windsor releases.
Note to self:- RTFM. Or in fact, read the Breakingchanges.txt file.
Here's the change that affects this behaviour (emphasis is mine):-
change - IReleasePolicy interface has a new method: IReleasePolicy
CreateSubPolicy(); usage of sub-policies changes how typed factories
handle out-of-band-release of components (see description)
impact - medium fixability - easy
description - This was added as an attempt to enable more fine grained
lifetime scoping (mostly for per-typed-factory right now, but in the
future also say - per-window in client app). As a side-effect of that
(and change to release policy behavior described above) it is no
longer possible to release objects resolved via typed factories, using
container.Release. As the objects are now tracked only in the scope
of the factory they will be released only if a call to factory
releasing method is made, or when the factory itself is released.
fix - Method should return new object that exposes the same behavior
as the 'parent' usually it is just best to return object of the same
type (as the built-in release policies do).
I didn't find the fix suggestion terribly helpful in my case; however, the workaround in my question is what you should actually do (release using the factory). I'll leave it up in case anyone else runs into this (non-)issue.
So I have made this simple interface:
package{
    public interface GraphADT{
        function addNode(newNode:Node):Boolean;
    }
}
I have also created a simple class Graph:
package{
    public class Graph implements GraphADT{
        protected var nodes:LinkedList;

        public function Graph(){
            nodes = new LinkedList();
        }

        public function addNode(newNode:Node):Boolean{
            return nodes.add(newNode);
        }
    }
}
Last but not least, I have created another simple class, AdjacancyListGraph:
package{
    public class AdjacancyListGraph extends Graph{
        public function AdjacancyListGraph(){
            super();
        }

        override public function addNode(newNode:AwareNode):Boolean{
            return nodes.add(newNode);
        }
    }
}
Having this setup here is giving me errors, namely:
1144: Interface method addNode in namespace GraphADT is implemented with an incompatible signature in class AdjacancyListGraph.
Upon closer inspection it was apparent that AS3 doesn't like the different parameter types in the two Graph classes: newNode:Node in Graph, and newNode:AwareNode in AdjacancyListGraph.
However, I don't understand why that would be a problem, since AwareNode is a subclass of Node.
Is there any way I can make my code work, while keeping the integrity of the code?
Simple answer:
If you don't really, really need your 'addNode()' function to accept only an AwareNode, you can just change the parameter type to Node. Since AwareNode extends Node, you can pass in an AwareNode without problems. You could check for type correctness within the function body:
subclass... {
    override public function addNode (node:Node) : Boolean {
        if (node is AwareNode) return nodes.add(node);
        return false;
    }
}
Longer answer:
I agree with @32bitkid that you are getting an error because the parameter type defined for addNode() in your interface differs from the type in your subclass.
However, the main problem at hand is that ActionScript generally does not allow function overloading (having more than one method of the same name, but with different parameters or return values), because each function is treated like a generic class member - the same way a variable is. You might call a function like this:
myClass.addNode (node);
but you might also call it like this:
myClass["addNode"](node);
Each member is stored by name - and you can always use that name to access it. Unfortunately, this means that you are only allowed to use each function name once within a class, regardless of how many parameters of which type it takes - nothing comes without a price: You gain flexibility in one regard, you lose some comfort in another.
Hence, you are only allowed to override methods with the exact same signature - it's a way to make you stick to what you decided upon when you wrote the base class. While you could obviously argue that this is a bad idea, and that it makes more sense to use overloading or allow different signatures in subclasses, there are some advantages to the way that AS handles functions, which will eventually help you solve your problem: You can use a type-checking function, or even pass one on as a parameter!
Consider this:
class... {
    protected function check (node:Node) : Boolean {
        return node is Node;
    }

    public function addNode (node:Node) : Boolean {
        if (check(node)) return nodes.add(node);
        return false;
    }
}
In this example, you could override check (node:Node):
subclass... {
    override protected function check (node:Node) : Boolean {
        return node is AwareNode;
    }
}
and achieve the exact same effect you desired, without breaking the interface contract - except, in your example, the compiler would throw an error if you passed in the wrong type, while in this one, the mistake would only be visible at runtime (a false return value).
You can also make this even more dynamic:
class... {
    public function addNode (node:Node, check:Function) : Boolean {
        if (check(node)) return nodes.add(node);
        return false;
    }
}
Note that this addNode function accepts a Function as a parameter, and that we call that function instead of a class method:
var f:Function = function (node:Node) : Boolean {
return node is AwareNode;
}
addNode (node, f);
This would allow you to become very flexible with your implementation - you can even do plausibility checks in the anonymous function, such as verifying the node's content. And you wouldn't even have to extend your class, unless you were going to add other functionality than just type correctness.
Having an interface will also allow you to create implementations that don't inherit from the original base class - you can write a whole different class hierarchy, it only has to implement the interface, and all your previous code will remain valid.
I guess the question is really this: What are you trying to accomplish?
As to why you are getting an error, consider this:
public class AnotherNode extends Node { }
and then:
var alGraph:AdjacancyListGraph = new AdjacancyListGraph();
alGraph.addNode(new AnotherNode());
// Won't work. AnotherNode isn't compatible with the signature
// for addNode(node:AwareNode)
// but what about the contract?
var igraphADT:GraphADT = GraphADT(alGraph);
igraphADT.addNode(new AnotherNode()); // WTF?
According to the interface this should be fine. But your implementation says otherwise; your implementation says that it will only accept an AwareNode. There is an obvious mismatch. If you are going to have an interface - a contract that your object should follow - then you might as well follow it. Otherwise, what's the point of the interface in the first place?
I submit that the architecture is messed up somewhere if you are trying to do this. Even if the language were to support it, I would say that it's a "Bad Idea™".
There's an easier way than the ones suggested above, but it is less safe:
public class Parent {
    public function get foo():Function { return this._foo; }
    protected var _foo:Function = function(node:Node):void { /* ... */ };
}

public class Child extends Parent {
    public function Child() {
        super();
        this._foo = function(node:AnotherNode):void { /* ... */ };
    }
}
Of course, _foo need not be declared in place; the syntax used is for brevity and demonstration purposes only.
You will lose the compiler's ability to check types, but runtime type matching will still apply.
Yet another way to go about it: don't declare the methods in the classes they specialize on; rather, make them static - then you will not inherit them automatically:
public class Parent {
    public static function foo(parent:Parent, node:Node):Function { /* ... */ }
}

public class Child extends Parent {
    public static function foo(parent:Child, node:Node):Function { /* ... */ }
}
Note that in the second case protected fields are accessible inside the static methods, so you can still achieve a certain degree of encapsulation. Besides, if you have a lot of Parent or Child instances, you will save on per-instance memory footprint (static methods exist only once, whereas function-valued instance members like _foo above are created for each instance). The disadvantage is that you won't be able to use interfaces (which can actually be an improvement... it depends on your personal preferences).
I have a service class which has overloaded constructors. One constructor has 5 parameters and the other has 4.
Before I call,
var service = IoC.Resolve<IService>();
I want to perform a check and, based on the result of this check, resolve the service using a specific constructor. In other words:
bool testPassed = CheckCertainConditions();

if (testPassed)
{
    // Resolve service using the 5-parameter constructor
}
else
{
    // Resolve service using the 4-parameter constructor
    // If I use the 5-parameter constructor under these conditions I will have epic fail.
}
Is there a way I can specify which one I want to use?
In general, you should watch out for ambiguity in constructors when it comes to DI because you are essentially saying to any caller that 'I don't really care if you use one or the other'. This is unlikely to be what you intended.
However, one container-agnostic solution is to wrap the conditional implementation into another class that implements the same interface:
public class ConditionalService : IService
{
    private readonly IService service;

    public ConditionalService()
    {
        bool testPassed = CheckCertainConditions();
        if (testPassed)
        {
            // assign this.service using the 5-parameter constructor
        }
        else
        {
            // assign this.service using the 4-parameter constructor
        }
    }

    // assuming that IService has a Foo method:
    public IBaz Foo(IBar bar)
    {
        return this.service.Foo(bar);
    }
}
If you can't perform the CheckCertainConditions check in the constructor, you can use lazy evaluation instead.
It would be a good idea to let ConditionalService request all dependencies via Constructor Injection, but I left that out of the example code.
You can register ConditionalService with the DI Container instead of the real implementation.
My underlying problem was that I was trying to resolve my class which had the following signature:
public DatabaseSchemaSynchronisationService(IDatabaseService databaseService, IUserSessionManager userSessionManager)
This was basically useless to me because my userSessionManager had no active NHibernate.ISession, since a connection to my database had not yet been made. What I was trying to do was check whether I had a connection and only then resolve this class, which served as a service to run database update scripts.
When changing my whole class to perform the scripts in a different way, all I needed in its constructor's signature was:
public DatabaseSchemaSynchronisationService(ISessionFactory sessionFactory)
This allowed me to open my own session. I did, however, have to check whether the connection was ready before attempting to resolve the class. Having IDatabaseSchemaSynchronisationService as a parameter in another class's constructor - with that class also getting resolved somewhere I could not check the DB connection - turned out to be a bad idea.
Instead, in this second class, I took the IDatabaseSchemaSynchronisationService parameter out of the constructor signature and made it a local variable, which only gets instantiated (resolved) inside a check like:
if (connectionIsReady)
Thanks to everyone who answered.