JUnit testing in Java for void methods

How do I write a test case for this function in a binary search tree?
void insert(String key) {
    root = insertRec(root, key);
}

Your method does something. It obviously changes the state of the object by inserting a rec(ord?) and somehow re-evaluating what the root is. So, to test it, you should somehow be able to determine the new state, for example...
@Test
public void insert_should_create_new_record_and_set_root() {
    Object originalRoot = myObject.getRoot();
    assertThat( myObject.getRec("xyz") ).isNull();
    myObject.insert("xyz");
    assertThat( myObject.getRec("xyz") ).isEqualTo("xyz"); // using AssertJ style here
    assertThat( myObject.getRoot() ).isNotEqualTo( originalRoot );
}
If, on the other hand, you have no way to check the state from the outside, then you'll have a problem. But somehow your class has to communicate its state to the outside, doesn't it? If you really think that you cannot check the new state, then you'll have to provide more of this class's code, as this answer is, of course, very general (which means "guessing", here).
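If the tree does expose some way to query its state, the test can stay very small. Here is a rough sketch, assuming a hypothetical BinarySearchTree class that offers a contains(String) lookup (adjust it to whatever accessor your class actually provides):
import static org.assertj.core.api.Assertions.assertThat;
import org.junit.Test;
public class BinarySearchTreeTest {
    @Test
    public void insert_should_make_key_findable() {
        BinarySearchTree tree = new BinarySearchTree(); // hypothetical class under test
        assertThat(tree.contains("xyz")).isFalse();     // assumes a contains(String) accessor exists
        tree.insert("xyz");
        assertThat(tree.contains("xyz")).isTrue();
    }
}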

Skip subtree in Listener ANTLR4

Is there any way to skip the parsing of a specific block while using a Listener in ANTLR4, using the enter or exit methods?
I have read the link here but have been unable to make it work.
Thank you!
By the time you're using the Listener pattern with your own Listener class, the input is already correctly lexed and parsed. Therefore, the answer to your question is no. When you're using the listener you're typically just walking the tree post-parse.
Does that mean all is lost though? Of course not. All you have to do is simply not code the Enter or Exit events for those constructs you want to "ignore." It's that easy.
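For instance, with the Java target this just means overriding only the callbacks you care about in the generated base listener (the grammar and rule names below are hypothetical, purely to illustrate the idea):
public class SelectiveListener extends MyGrammarBaseListener {
    // react only to the rules you are interested in
    @Override
    public void enterAssignment(MyGrammarParser.AssignmentContext ctx) {
        System.out.println("assignment: " + ctx.getText());
    }
    // no enterFunctionBody/exitFunctionBody overrides here: the walker still visits
    // those subtrees, but the inherited empty methods do nothing, so that part of
    // the tree is effectively ignored
}
You then walk the tree as usual, e.g. ParseTreeWalker.DEFAULT.walk(new SelectiveListener(), tree); - the "skipped" constructs simply produce no work.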
As to how to program an if statement, I'll give you a peek at the way I implement them, using the visitor pattern:
public override MuValue VisitIfstmt(LISBASICParser.IfstmtContext context)
{
    LISBASICParser.Condition_blockContext[] conditions = context.condition_block();
    bool evaluatedBlock = false;
    foreach (LISBASICParser.Condition_blockContext condition in conditions)
    {
        MuValue evaluated = Visit(condition.expr());
        if (evaluated.AsBoolean())
        {
            evaluatedBlock = true;
            Visit(condition.stmt_block());
            break;
        }
    }
    if (!evaluatedBlock && context.stmt_block() != null)
    {
        Visit(context.stmt_block());
    }
    return MuValue.Void;
}
Granted, this probably doesn't make much sense out of context, but rest assured it works. To see this in its full context, please visit Bart Kiers for an excellent example of a grammar and implementation.

LibGDX assigning a specific shader to a ModelInstance

I have recently been learning and implementing my own shaders in libGDX.
So far I have done this with a custom shader provider, which chooses between a few shaders based on the userData value of the object:
public class MyShaderProvider extends DefaultShaderProvider {
    public final DefaultShader.Config config;
    final static String logstag = "ME.MyShaderProvider";

    //known shaders
    static public enum shadertypes {
        prettynoise,
        invert,
        standardlibgdx,
        noise,
        distancefield,
        conceptbeam
    }

    public MyShaderProvider (final DefaultShader.Config config) {
        this.config = (config == null) ? new DefaultShader.Config() : config;
    }

    public MyShaderProvider (final String vertexShader, final String fragmentShader) {
        this(new DefaultShader.Config(vertexShader, fragmentShader));
    }

    public MyShaderProvider (final FileHandle vertexShader, final FileHandle fragmentShader) {
        this(vertexShader.readString(), fragmentShader.readString());
    }

    public MyShaderProvider () {
        this(null);
    }

    public void testListShader(Renderable instance){
        for (Shader shader : shaders) {
            Gdx.app.log(logstag, "shader="+shader.getClass().getName());
            Gdx.app.log(logstag, "can render="+shader.canRender(instance));
        }
    }

    @Override
    protected Shader createShader (final Renderable renderable) {
        //pick shader based on renderables userdata?
        shadertypes shaderenum = (shadertypes) renderable.userData;
        if (shaderenum==null){
            return super.createShader(renderable);
        }
        Gdx.app.log(logstag, "shaderenum="+shaderenum.toString());
        switch (shaderenum) {
            case prettynoise:
            {
                return new PrettyNoiseShader();
            }
            case invert:
            {
                String vert = Gdx.files.internal("shaders/invert.vertex.glsl").readString();
                String frag = Gdx.files.internal("shaders/invert.fragment.glsl").readString();
                return new DefaultShader(renderable, new DefaultShader.Config(vert, frag));
            }
            case noise:
            {
                return new NoiseShader();
            }
            case conceptbeam:
            {
                Gdx.app.log(logstag, "creating concept gun beam ");
                return new ConceptBeamShader();
            }
            case distancefield:
            {
                return new DistanceFieldShader();
            }
            default:
                return super.createShader(renderable);
        }
        //return new DefaultShader(renderable, new DefaultShader.Config());
    }
}
This seemed to work.
I have an object with a noise shader applied, animating fine.
I have an object with an inverted-texture shader, again looking fine.
I have a whole bunch of other objects being rendered with the normal default shader.
It seems the provider as I have set it up is correctly rendering different objects with different shaders based on userData.
However, I recently found that a new object I created with a new shader type (ConceptBeamShader) is only being rendered with the default shader.
The object's userData is set the same way as the others':
newlazer.userData = MyShaderProvider.shadertypes.conceptbeam;
However, at no point does the ConceptBeamShader get created or used.
In fact, createShader() doesn't seem to run for it at all, implying that an existing shader in the shaders array is considered good enough.
Using the testListShader() function above, I see "DefaultShader" is in the shaders list, which canRender anything, and thus it never gets to creating the new shader I want that object to use :-/
I assume the other shaders only got picked before because those objects were created before DefaultShader got added to that internal shader list.
Surely as soon as a DefaultShader is used, it gets stored in that provider list and will "gobble up" any renderable that should get another shader. The getShader function in the class MyShaderProvider extends is:
public Shader getShader (Renderable renderable) {
    Shader suggestedShader = renderable.shader;
    if (suggestedShader != null && suggestedShader.canRender(renderable)) return suggestedShader;
    for (Shader shader : shaders) {
        if (shader.canRender(renderable)) return shader;
    }
    final Shader shader = createShader(renderable);
    shader.init();
    shaders.add(shader);
    return shader;
}
As you can see, the shaders are looped over and the first one which returns true for "canRender" is used.
So... umm... how exactly are you supposed to say "render this ModelInstance with this shader"?
None of the tutorials I have read online seem to cover this - in fact, the one on the official site seems to recommend exactly what I am doing, so there's clearly something I am missing.
Thanks,
Edit:
The place where it is instantiated was asked for. Not sure how this helps, but here it is:
public static MyShaderProvider myshaderprovider = new MyShaderProvider();
It's then assigned to the ModelBatch at the game's setup:
modelBatch = new ModelBatch(myshaderprovider);
As mentioned, my other shaders are working and visible on the objects I assigned the matching userData to, so I am 99.9% sure the provider is being called and is, at least in some cases, picking the right shader for the right object.
My hunch is that it goes wrong as soon as "DefaultShader" gets added to the internal shader list.
There are several ways to specify the Shader to use for a ModelInstance. One of which is to specify the Shader when calling the render method on the ModelBatch:
modelBatch.render(modelInstance, shader);
This will hint the ModelBatch to use this shader, which it will almost always do, unless the specified Shader isn't suitable to render. Whether a Shader is suitable (and should be used) to render the ModelInstance is determined by the call to Shader#canRender(Renderable).
Note the difference between the Renderable and the ModelInstance. This is because a single ModelInstance can consist of multiple parts (nodes), each of which might need a different Shader. For example, when you have a car model, it might consist of an opaque chassis and transparent windows, which require a different shader for the windows than for the chassis.
Therefore specifying a Shader for an entire ModelInstance isn't always very useful. Instead you might need more control over which Shader is used for each specific part of the model (each render call). For this you can implement the ShaderProvider interface, which allows you to use whichever Shader you like for each Renderable. Of course you should make sure that the Shader#canRender(Renderable) method of the Shader you use returns true for the specified Renderable.
It can be useful to extend the DefaultShaderProvider so you can fall back on the DefaultShader when you don't need a custom shader. In that case you must make sure that there's an unambiguous and consistent distinction between when the default shader should be used and when a custom shader should be used. That is, the DefaultShader#canRender method should not return true when a custom shader should be used, and your custom shader's canRender method should not return true when the DefaultShader should be used. (In itself this isn't specific to custom or default shaders; you always need to know which shader to use.)
You are trying to use ModelInstance#userData to distinguish between a custom and the default shader. There are two issues with this:
1. The userData is the same for every Renderable of the ModelInstance. So practically you are overcomplicating your design for no gain; you might as well use modelBatch.render(modelInstance, shader).
2. The DefaultShader isn't and can't be aware of any user-specific data. It simply looks at the information it does know about (the material, mesh, environment, etc.) and returns true in canRender if it should be used to render based on that info.
To solve the second point, the libGDX 3D API comes with attributes (used for both the environment and the material). By design these allow you to compare a Shader and a Renderable with just two numbers, which are bitwise masks of the attributes. Therefore the preferred, easiest and fastest method is to use a custom attribute. This not only lets you unambiguously identify which shader to use, but also lets you specify the information the shader requires (there's a reason you want to use a different shader).
An example of how to do that can be found here and here.
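For a rough idea, a minimal sketch of such a custom attribute might look like the following (the names here are made up, and the exact abstract methods of Attribute depend on your libGDX version - recent releases expect compareTo(Attribute), older ones equals(Attribute)):
import com.badlogic.gdx.graphics.g3d.Attribute;
public class ConceptBeamAttribute extends Attribute {
    public final static String Alias = "conceptBeam";
    public final static long ID = register(Alias);
    public float intensity; // whatever data the custom shader actually needs
    public ConceptBeamAttribute (float intensity) {
        super(ID);
        this.intensity = intensity;
    }
    @Override
    public Attribute copy () {
        return new ConceptBeamAttribute(intensity);
    }
    @Override
    public int compareTo (Attribute o) {
        if (type != o.type) return (int)(type - o.type);
        float other = ((ConceptBeamAttribute)o).intensity;
        return intensity == other ? 0 : (intensity < other ? -1 : 1);
    }
}
Your custom shader's canRender can then simply check renderable.material.has(ConceptBeamAttribute.ID), which gives you the unambiguous distinction described above without relying on userData.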

What are _loc_ variables when swfs are decompiled into AS3?

When running swfs through decompilers (my own swfs, not somebody else's), I've noticed a lot of mention of certain variables:
_loc_1
_loc_2
_loc_3
...
_loc_n
As in the following example:
private function templateFilterFunction(param1) : Boolean
{
    var _loc_2:* = false;
    if (filterFunction != null)
    {
        _loc_2 = filterFunction(param1, typedText);
    }
    return _loc_2;
}
Alright, so these are apparently just normal variables then, right? They may have had more descriptive names in the original AS3 code, but those have been lost in the bytecode, and now we have the same variables as before, just with nondescript names, right?
Not exactly. For instance:
package
{
    public class SomeClass extends Object
    {
        public var var1:Number;
        public var var2:Number;
        public var var3:Number;
        public function SomeClass(param1:Number, param2:Number, param3:Number)
        {
            if (!_loc_5)
            {
                if (!_loc_4)
                {
                    var3 = param1;
                    if (!_loc_4)
                    {
                        var1 = param2;
                    }
                }
            }
            var2 = param3;
            return;
        }// end function
    }
}
These aren't declared. But they're not exactly members of Object either, and I've never seen them outside of a swf decompilation. What are they then? Thanks.
Not sure about that particular piece of code, but the decompilers I've used (as far as I remember) all call the local variables loc_n, local_n or something like that.
I think you already know why. Local variables are created and pushed onto the execution stack; they are not referenced from outside the local scope and, since they are not callable by name, their names are simply stripped from the bytecode. (The object pointed to by the variable could be allocated on the heap and live outside the scope of the function, but that's not the point here.)
Now, another thing you might be aware of is that some bytecode generated by the compiler just doesn't translate to ActionScript code. There are things that can be done in bytecode that are not really possible in AS code; an example, off the top of my head: the "dup" opcode (duplicates a value and pushes it onto the stack). There are others (jumps, noops, etc.). Reversing this to the original source code is sometimes not possible.
There are other interesting cases such as loops. You may notice that a particular decompiler tends to generate "for loops" (or "while loops") regardless of whether the source code had a for or a while. That's because loops are higher-level constructs that are usually implemented in bytecode as conditional jumps. If you want to reverse the bytecode to AS code, you just have to pick a flavor, because the loop (as an AS construct) is just not there.
That said, I've seen some decompilers (can't remember which one now) generate invalid or nonsensical source code. To me, that's the case in the example you posted. I may be wrong, but it seems like the _loc_5 and _loc_4 vars are just gibberish and the original code must have been something like:
public function SomeClass(param1:Number, param2:Number, param3:Number)
{
    var3 = param1;
    var1 = param2;
    var2 = param3;
}

How to do CreateBindingSet() on Windows Phone?

In the N+1 video #34 (Progress), there was an example of using CreateBindingSet() for the Android version, which is not typical. But the narrator also mentioned briefly that the same can be done on the Windows platform.
As much as I have tried, however, I am unable to get a View's property to be bound to its ViewModel on Windows Phone. I always get a NullReferenceException.
The closest I came was the code below, including suggestions from ReSharper. Here's my FirstView.xaml.cs:
using Cirrious.MvvmCross.Binding.BindingContext;
using Whatever.ViewModels;

namespace Whatever {
    // inheriting from IMvxBindingContextOwner was suggested by ReSharper also
    public partial class FirstView : BaseView, IMvxBindingContextOwner {
        public class MyBindableMediaElement
        {
            private string _theMediaSource = "whatever";
            public string TheMediaSource
            {
                get
                {
                    return _theMediaSource;
                }
                set
                {
                    _theMediaSource = value;
                }
            }
        }

        public FirstView()
        {
            InitializeComponent();
            _mediaElement = new MyBindableMediaElement(this.theMediaElement);
            var set = this.CreateBindingSet<FirstView, FirstViewModel>();
            // the corresponding view model has a .SongToPlay property with get/set defined
            set.Bind(_mediaElement).For(v => v.TheMediaSource).To(vm => vm.SongToPlay);
            set.Apply();
        }

        public IMvxBindingContext BindingContext { get; set; } // this was suggested by ReSharper
    }
}
I get a NullReferenceException in MvxBaseFluentBindingDescription.cs as soon as the view is created. The exact location is below:
protected static string TargetPropertyName(Expression<Func<TTarget, object>> targetPropertyPath)
{
    var parser = MvxBindingSingletonCache.Instance.PropertyExpressionParser; // <----- exception here
    var targetPropertyName = parser.Parse(targetPropertyPath).Print();
    return targetPropertyName;
}
I have not seen a working example of creating a binding set on a Windows Phone emulator. Has anyone gotten this to work? Thanks.
I can confirm that the narrator made that remark a little too flippantly, without actually thinking about how he might do it...
However, with a little effort, you definitely can get the CreateBindingSet to work in Windows if you want to.
Before you start, do consider some alternatives - in particular, I suspect most people will use either Windows DependencyProperty binding or some hand-crafted code-behind with a PropertyChanged event subscription.
If you do want to add CreateBindingSet code to a Windows project then:
1. Add the Binding and BindingEx assemblies to your UI project - the easiest way to do this is to use NuGet to add the BindingEx package.
2. In your Setup class, override InitializeLastChance and use this opportunity to create a MvxWindowsBindingBuilder instance and to call DoRegistration on that builder. Both of these first two steps are covered in the N=35 Tibet binding video - and it's this second step that will initialise the binding framework and help you get past your current 'NullReferenceException' (for the code, see BindMe.Store/Setup.cs).
3. In your view, you'll need to implement the IMvxBindingContextOwner interface and you'll need to ensure the binding context gets created. You should be able to do this as simply as BindingContext = new MvxBindingContext();
4. In your view, you'll need to make sure the binding context is given the same DataContext (view model) as the Windows DataContext. For a Phone Page, the easiest way to do this is probably just to add BindingContext.DataContext = this.ViewModel; to the end of your phone page's OnNavigatedTo method. Both steps 3 and 4 could go in your BaseView if you intend to use Mvx binding in other classes too.
With this done, you should be able to use the CreateBindingSet code - although do make sure that all binding is done after the new MvxBindingContext() has been created.
I've not got a Windows machine with me right now, so I'm afraid this answer's code comes untested - please do post again to say whether or not it works.
I can confirm it works almost perfectly; the only problem is that there are no defaults registered, so one has to spell out the full binding, like:
set.Bind(PageText).For(c => c.Text).To(vm => vm.Contents.PageText).OneTime();
To fix this, instead of registering MvxWindowsBindingBuilder, I am registering the following class. Note: I have only just created this class, and it still needs testing.
public class UpdatedMvxWindowsBindingBuilder : MvxWindowsBindingBuilder
{
    protected override void FillDefaultBindingNames(IMvxBindingNameRegistry registry)
    {
        base.FillDefaultBindingNames(registry);
        registry.AddOrOverwrite(typeof(Button), "Command");
        registry.AddOrOverwrite(typeof(HyperlinkButton), "Command");
        //registry.AddOrOverwrite(typeof(UIBarButtonItem), "Clicked");
        //registry.AddOrOverwrite(typeof(UISearchBar), "Text");
        //registry.AddOrOverwrite(typeof(UITextField), "Text");
        registry.AddOrOverwrite(typeof(TextBlock), "Text");
        //registry.AddOrOverwrite(typeof(UILabel), "Text");
        //registry.AddOrOverwrite(typeof(MvxCollectionViewSource), "ItemsSource");
        //registry.AddOrOverwrite(typeof(MvxTableViewSource), "ItemsSource");
        //registry.AddOrOverwrite(typeof(MvxImageView), "ImageUrl");
        //registry.AddOrOverwrite(typeof(UIImageView), "Image");
        //registry.AddOrOverwrite(typeof(UIDatePicker), "Date");
        //registry.AddOrOverwrite(typeof(UISlider), "Value");
        //registry.AddOrOverwrite(typeof(UISwitch), "On");
        //registry.AddOrOverwrite(typeof(UIProgressView), "Progress");
        //registry.AddOrOverwrite(typeof(IMvxImageHelper<UIImage>), "ImageUrl");
        //registry.AddOrOverwrite(typeof(MvxImageViewLoader), "ImageUrl");
        //if (_fillBindingNamesAction != null)
        //    _fillBindingNamesAction(registry);
    }
}
This is a skeleton based on the Touch binding builder, and so far I have only updated three controls to test it out (Button, HyperlinkButton and TextBlock).

How to clone an object without knowing the exact type in AIR for iOS

I am writing an iOS game in Flash and I need a way to clone polymorphic objects.
I have BaseClass, SubClass1, SubClass2 (and so on...) and I need a clone() method in BaseClass that will create a copy of the current object, without a conditional such as:
var obj:BaseClass;
if (this is SubClass1) {
    obj = new SubClass1();
} else if (this is SubClass2) {
    obj = new SubClass2();
} else ...
I need a way to create a new object that copies the exact bytes of the original (yes, a shallow copy is enough for my purpose). I've looked at:
AS3 - Clone an object
As3 Copy object
http://actionscripthowto.com/how-to-clone-objects-in-as3/
But none seem to work - probably they're not available in the AIR 3.3 for iOS SDK (they compile, but the code doesn't work in my case).
Is there any other way, or did anybody achieve to clone an object in AIR for iOS?
Thanks,
Can.
Bit-by-bit cloning cannot be done with ActionScript, unless your class only contains primitive values (i.e. a simple data structure). That's what the ByteArray approach you've linked to in this question's answer is used for - but when you're dealing with complex types, especially display objects, you'll soon come to the limits (as, I gather, you have already realized).
So this more or less leaves you with two options:
Create a new object and copy all of its fields and properties.
This is the way to go if you're going to need behavior and field values, and you didn't use any drawing methods (i.e., you cannot copy vector graphics this way). Creating a new class instance without knowing its exact type can be done in a generalized way using reflection: getQualifiedClassName() and getDefinitionByName() will help you there, and if you need more than just the name, describeType(). This does have limits, too, though: private fields will not be available (they don't appear in the information provided by describeType()), and in order not to run into performance problems, you will have to use some sort of caching. Luckily, as3commons-reflect has already solved this, so implementing the rest of what you need for a fully functional shallow copy mechanism is not too complex.
Create a new instance like this:
var newObject:* = new Type.forInstance( myObject ).clazz();
Then iterate over all accessors, variables and dynamic properties and assign the old instance's values.
I have implemented a method like this myself, for an open source framework I am working on. You can download or fork it on GitHub. There isn't any documentation yet, but its use is as simple as writing:
var myCopy:* = shallowCopy( myObject );
I also have a copy() method there, which creates a true deep copy. This, however, has not been tested with anything but data structures (albeit large ones), so use at your own risk ;)
Create a bitmap copy.
If you do have vector graphics in place, this is often easier than recreating an image: Simply draw the content of the object's graphics to a new Bitmap.
function bitmapCopy( source:Sprite ):Bitmap {
    source.cacheAsBitmap = true;
    var bitmapData:BitmapData = new BitmapData( source.width, source.height, true, 0xFFFFFF );
    bitmapData.draw( source, new Matrix(), null, null, null, true );
    return new Bitmap( bitmapData, PixelSnapping.AUTO, true );
}
You need to create an abstract clone method in the base class and implement it for each subclass. In the specific implementations, you would copy all of the properties of the object to the new one.
public class BaseClass {
    public function clone():BaseClass
    {
        // throw an error so you quickly see the places where you forgot to override it
        throw new Error("clone() should be overridden in subclasses!");
        return null;
    }
}
public class Subclass1 extends BaseClass {
    public override function clone():BaseClass
    {
        var copy:Subclass1 = new Subclass1();
        copy.prop1 = prop1;
        copy.prop2 = prop2;
        // .. etc
        return copy;
    }
}
If you wanted to create a generic default implementation of clone, you could use describeType to access the properties and copy them over:
public function clone():BaseClass
{
    var defn:XML = describeType(this);
    var clsName:String = defn.@name;
    var cls:Class = getDefinitionByName(clsName) as Class;
    var inst:* = new cls();
    for each (var prop:String in (defn.variable + defn.accessor.(@access == 'readwrite')).@name)
    {
        inst[prop] = this[prop];
    }
    return inst;
}
The main issue with this is that the describeType XML can get quite large - especially if you are dealing with objects that extend DisplayObject. That could use a lot of memory and be slow on iOS.