Detect whether a touch point belongs to the clipped area of a Layer (cocos2d-x)

I need to add a translucent guide layer above my UI. The details are as follows:
There is one area (or more) of the layer (where I suggest the user click) that is totally transparent and transmits touches to the UI below.
At the same time, the rest of the layer, which is translucent, swallows touches.
I use a LayerColor clipped by a ClippingNode to implement the guide layer, and I have an EventListenerTouchOneByOne (with setSwallowTouches(true)) to detect touches, and then, in:
bool touchBegan(cocos2d::Touch *touch, cocos2d::Event *event) {
    // return (whether the touch point belongs to the translucent area)
}
So, is there any way to tell whether a point belongs to the clipped area? Thanks.
P.S. Since the shape of the clipped area is irregular, checking whether the boundingBox of the stencil contains the touch point may not be acceptable.
P.P.S. I've tried reading the pixel value of my LayerColor following methods such as "Getting RGBA value of a pixel in a CCSprite", but failed to get any values; someone says those methods no longer apply to cocos2d-x 3.x? Besides, I wonder whether the pixel values of the LayerColor really change after being clipped.
Thanks again :D

Related

Transparency issues with 3d particles and 3d models, libgdx

I got some strange issues with transparency and 3d particles. A short vid to illustrate:
https://youtu.be/ZHKI1X3MjhY
As you can see, I have a 3d particle effect, fire burning. Inside it is a 3d model with no alpha blending, and it shows just fine. Then in the far distance there is a small skeleton (with blending and alpha test turned on) and it also shows just fine through the fire. Then I turn the camera and look at the warrior skeleton, and he just disappears; instead you see what is behind him. I turn the camera again and the mage skeleton also vanishes, but you can see the trees a bit further away just fine, and they have the exact same settings for blending and alpha test. If I move the character, say, 20 yards away, it also starts showing through the fire effect.
So it seems to have something to do with distance from the 3d particle effect...
The 3d particle batch is an extended BillboardParticleBatch like this:
protected Renderable allocRenderable() {
    BlendingAttribute ba = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f);
    Renderable r = super.allocRenderable();
    r.material = new Material(ba,
        // new DepthTestAttribute(GL20.GL_LEQUAL, 0.0f, 0.5f, true),
        // r.material.set(new FloatAttribute(FloatAttribute.AlphaTest, 0.0f),
        TextureAttribute.createDiffuse(texture));
    return r;
}
All the characters and the trees are created with the following attributes:
if (alpha) {
    FloatAttribute floatAttribute = new FloatAttribute(FloatAttribute.AlphaTest, 0.5f);
    BlendingAttribute blendingAttribute = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 1f);
    for (int i = 0; i < bulletEntity.modelInstance.materials.size; i++) {
        bulletEntity.modelInstance.materials.get(i).set(blendingAttribute);
        bulletEntity.modelInstance.materials.get(i).set(floatAttribute);
    }
}
The models are drawn first, then the particles; I tried changing the order but it made no difference. I have tried a lot of different setups for alpha test, depth test and the blending attribute but cannot find anything that works.
EDIT
I removed the BlendingAttribute from the 3d models and now it looks as it should regarding the particle effect. However, I need most materials on my character models to have blending set...
Anyone got any clue why this is happening when I enable blending?
I also tried using the BillboardParticleBatch without extending it, in case I had done something wrong there, but the effect is even worse: all models with blending enabled appear in front of the particle effect even though they stand behind it.
ModelBatch sorts your render calls (check this link, really, it is a must read) to avoid incorrect behavior like what you're experiencing. The actual sorting/rendering happens at the call to ModelBatch#end. By default it uses the DefaultRenderableSorter. Of course, because that implementation isn't aware of your scene, it might not fit your needs exactly.
The DefaultRenderableSorter tries to guess the location of each model based on its transformation matrix. Based on that location and the camera's location it will sort them so that:
First all opaque objects are rendered from front to back (because whatever is behind an opaque object isn't visible anyway, so that reduces unneeded calls to the fragment shader).
Secondly all transparent objects are rendered from back to front (because as soon as a transparent object is rendered then everything that is rendered after that and is behind it, will not be visible).
To decide whether an object is transparent, the BlendingAttribute#blended member is used. (So you could, if you really wanted to, set that member to false to force it to be treated (sorted) as if it was opaque)
So, the order in which you call ModelBatch#render is not necessarily the order in which they are actually executed. If you want to force to render whatever you've added to the batch in between, then call the ModelBatch#flush(). Of course, doing this a lot defeats some of the purpose of ModelBatch in the first place.
Instead you could implement your own RenderableSorter, which has more knowledge about your scene and can therefore do a better job sorting than the default implementation. (However, if flush() works for you and there's no other issue, then just flushing might be the easiest solution for you.)
That said, there are various other solutions you could try as well. E.g. the regions of the particles that are fully transparent might as well be discarded by the fragment shader altogether. Try adding FloatAttribute.AlphaTest with a value of 0.5f to the particles. If that messes with your blending, then gradually reduce the value to e.g. 0.05f.
Also, you could add a DepthTestAttribute with depthMask set to false (new DepthTestAttribute(false)). This will prevent the particles from writing to the depth buffer. (but also might cause other things to show in front of the particles).
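To illustrate those last two suggestions together, here is a minimal sketch adapting the allocRenderable() override from the question (assuming texture is the batch's texture, as in the question; the alpha test value is a starting point to tune, not a definitive fix):

protected Renderable allocRenderable() {
    Renderable r = super.allocRenderable();
    r.material = new Material(
            new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f),
            // discard (nearly) fully transparent fragments so they can't occlude models behind them
            FloatAttribute.createAlphaTest(0.05f),
            // keep depth testing, but don't let the particles write to the depth buffer
            new DepthTestAttribute(false),
            TextureAttribute.createDiffuse(texture));
    return r;
}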

contact listener for box2d not working properly

I have two bodies: one circle with a ball sprite inside, and one polygon with a bird sprite. I am trying to detect collision between the sprites within the bodies, and not the bodies themselves, as in the code snippet below.
@Override
public void beginContact(Contact contact) {
    Body a = contact.getFixtureA().getBody();
    Body b = contact.getFixtureB().getBody();
    if (contact.isTouching()) {
        System.out.println(contact.isTouching());
        if (a.getUserData() == Constants.Enemy || b.getUserData() == Constants.Enemy) {
            System.out.println("yes");
        }
    }
}
The method above prints out "yes" when the bodies are in a state as in the picture below, which is not right because the sprites have not touched each other. Any ideas?
There is probably another way of doing this, but if I wanted more accuracy, I would use this tool -> http://www.aurelienribon.com/blog/projects/physics-body-editor/
If you run into an error at first, you can look at these questions; they might be errors with the loader ->
Physics Body Editor error
or this: BodyEditorLoader - noSuchMethod. In that answer I posted the loader that works well for me in libgdx (1.5.x).
I hope this helps.
Update:
You said: "thanks for this but I am not sure whether this will help me in my case."
To start with, Box2D knows nothing of your sprite, its position or anything else; Box2D just knows fixtures etc. If your sprite does not match the size of the fixture, Box2D does not know that; it is not malfunctioning, you are just expecting something different from what it does.
So, using the tool I mentioned, you can adjust the fixture to the shape of the sprite in a friendlier way.
This is an image simulating how it could look; the fixture is drawn over the image in Gimp, just to illustrate the idea:
Like Angel Angel said,
Box2D uses bounding volumes to detect collisions, which are not pixel perfect and do not know anything about the Sprite itself. This is for performance reasons, since collision detection has a huge impact on performance.
The solution is to make the bounding volume more accurate. You can use PolygonShapes or make your bounding rect smaller.
In your case I would consider using a PolygonShape.
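For illustration, a minimal sketch of what that could look like with the libgdx Box2D API (the vertex values here are made up; in practice you would trace them from the bird sprite, e.g. with the Physics Body Editor mentioned above, and birdBody is assumed to already exist):

// Hypothetical outline around the bird sprite, in body-local coordinates
// (Box2D polygons are convex and limited to 8 vertices).
Vector2[] vertices = new Vector2[] {
    new Vector2(-0.4f, -0.3f),
    new Vector2( 0.5f, -0.2f),
    new Vector2( 0.4f,  0.3f),
    new Vector2(-0.3f,  0.4f)
};

PolygonShape shape = new PolygonShape();
shape.set(vertices);

FixtureDef fixtureDef = new FixtureDef();
fixtureDef.shape = shape;
fixtureDef.density = 1f;

birdBody.createFixture(fixtureDef);
shape.dispose(); // the body keeps its own copy of the shape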

As3, OOP strategy for custom class DrawVectorLineArt

I am doing some math projects that require a lot of vector line art, that is, a line drawn between two points with a circle at the start point and an arrow at the end point. A call to Math.atan2() keeps the arrow aligned. I call this class DrawVectorLineArt() and it creates instances of 2 other custom classes, DrawArrow() and DrawCircle().
So far so good--DrawVectorLineArt() draws just what I need. Now I need to animate the vector art.
So in a function onEnterFrame I want to update the position of the arrow and circle, the objects created by DrawArrow() and DrawCircle(), respectively. I also need to clear and redraw the line drawn between them. At this point I am not sure how to proceed in an OOP framework. Do I need to create methods of my custom class DrawVectorLineArt() to update the position of the arrow and circle and subsequently clear and redraw the connecting line?
Any advice or links appreciated. Thanks!
"Do I need to create methods of my custorm class DrawVectorLineArt() to update the position of arrow and circle and subsequently clear and redraw the connecting line?"
Yes.
The arrow and the circle are very much members of DrawVectorLineArt, and going by its name and choice of members, so should the line be (if it's implemented through actual data). DrawVectorLineArt should contain and implement the whole animation of the circle, arrow, and line. As such, if the animation is supposed to be able to change after creation, the same instance of DrawVectorLineArt should be able to take any two legitimate points supplied to it (or that it becomes aware of internally, depending on what you're doing), reposition the three components, and turn the arrow and line appropriately, within its own code.
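As a rough sketch of that structure (shown in Java rather than AS3 since the idea is language-independent; all names here are hypothetical and the actual drawing calls depend on your rendering code):

// Rough analogue of the suggested design: the class owns all three parts
// and exposes one method that repositions everything consistently.
class VectorLineArt {
    private double startX, startY; // where the circle sits
    private double endX, endY;     // where the arrowhead sits
    private double arrowAngle;     // keeps the arrowhead aligned with the line

    // Call this from onEnterFrame: move both endpoints, re-aim the arrow,
    // and redraw the connecting line, all in one place.
    public void setEndpoints(double x1, double y1, double x2, double y2) {
        startX = x1; startY = y1;
        endX = x2;   endY = y2;
        arrowAngle = Math.atan2(y2 - y1, x2 - x1);
        redraw();
    }

    private void redraw() {
        // clear the old graphics, then draw the circle at (startX, startY),
        // the line from start to end, and the arrow at (endX, endY) rotated by arrowAngle
    }
}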

Has anyone experienced side effects (including performance issues) of using getObjectsUnderPoint?

Before I go making a major change in my ongoing game project, I just want to hear from others whether anyone has found any issues with the getObjectsUnderPoint() function of DisplayObject.
Update:
Not just the performance issue, but any other limitations of using it (like it doesn't detect certain types of UI elements, just as an example).
I will have three layers in my application (which is an isometric game):
Background -- This is just a background which stays at the bottom; it has nothing to do with the game.
Middle Layer -- This is the playable area; all my game elements are placed on this layer.
Top Layer -- This is a dummy transparent layer covering the entire playable area, which intercepts all the mouse events. This is where I want to use getObjectsUnderPoint().
So, when the player clicks on an element, the top layer will intercept the mouse event and then check whether there is something placed there or just plain background, and take the appropriate action, like notifying the object underneath.
It really doesn't have to be done this way, because I could simply add mouse events directly to all those items placed on the map; but I would be using getObjectsUnderPoint() anyway to check if there is anything beneath an item.
If anyone can explain how this function works, it would be a little easier for me to make a decision.
There was one annoying problem, though. I don't know if they fixed it or not; at least it was there around the Flash Player 10.1 days.
If you have a container and you scaled it, container.getObjectsUnderPoint() will return a wrong result. All the time. So everywhere I needed getObjectsUnderPoint I had to call it from the stage to get a proper result.
It's an incomplete function. It returns graphical objects under the mouse, NOT all potential mouse targets for event or interaction purposes. It actually requires complex logic to examine the array returned by getObjectsUnderPoint to determine the mouse target, because the appropriate target (the one Flash would choose if you actually clicked that point) may not be in the list.
First you'd have to examine the object array in reverse, since the items are ordered back to front. You'd have to examine each object's entire parent chain, looking for a parent with mouseChildren = false that would cause it to intercept the event and become the target. Whether or not such an object is found, this final object you arrive at must have its mouseEnabled property set to true, otherwise you must skip it and move on to the next object in the array, which would be, for example, the next sprite or shape behind the one you initially checked. While going through the list, you must notice when the parent changes, at which point you need to assume that all children of that common parent had their mouseEnabled property set to false, in which case the parent would become the next candidate. This is actually extremely complicated, because you're working backwards in a bottom-up approach with an incomplete set of objects that was generated from the top-down.
To get actual potential mouse event targets, consistent with the default dispatching logic... it is actually easier to start from the stage in a top-down manner and walk backwards through the display hierarchy in a depth-first search, checking mouseChildren to determine whether you need to step into children, and checking mouseEnabled if it's to be a target, otherwise stepping into the container's children and repeating the process from back to front again. This is much more accurate, complete, and straightforward. The only problem is you have to code it yourself.
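As a rough sketch of that top-down walk (in Java over a hypothetical node type, since what matters is the logic; in Flash the equivalent would be DisplayObjectContainer with its mouseChildren/mouseEnabled flags):

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a display object; children are stored back to front.
class Node {
    boolean mouseEnabled = true;
    boolean mouseChildren = true;
    List<Node> children = new ArrayList<>();

    // Whether this node's combined graphics (itself plus descendants) contain the point.
    boolean hitTest(double x, double y) { return false; }

    // Depth-first, front-to-back search for the mouse target,
    // approximating how the default dispatch would pick it.
    static Node findTarget(Node node, double x, double y) {
        if (!node.hitTest(x, y)) return null;
        if (node.mouseChildren) {
            for (int i = node.children.size() - 1; i >= 0; i--) { // front-most child first
                Node hit = findTarget(node.children.get(i), x, y);
                if (hit != null) return hit;
            }
        }
        // No child claimed the event: this node is the target only if it accepts mouse events.
        return node.mouseEnabled ? node : null;
    }
}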

Drag objects in canvas

I'm looking for an easy-to-use method of assigning drag behavior to multiple objects (images, shapes, etc.) in canvas. Does anyone have a good way, or know of any libraries, for dragging objects around? Thanks
Creating your own mouse events takes a little work - ideally you should either create or use some kind of mini-library. I'm thinking of creating something like this in the near future. Anyway, I created a drag and drop demo on jsFiddle showing how to drag images - you can view it here.
You can create draggable images like this:
var myImage = new DragImage(sourcePath, x, y);
Let me know if you have any questions about this. Hope it helps.
EDIT
There was a bug when dragging multiple images. Here is a new version.
Another thing you might want to check out is EaselJS; it's sort of in the style of AS3... mouse events, dragging, etc.
The HTML Canvas—unlike SVG or HTML—uses a non-retained (or immediate) graphics API. This means that when you draw something (like an image) to the canvas no knowledge of that thing remains. The only thing left is pixels on the canvas, blended with all the previous pixels. You can't really drag a subset of pixels; for one thing, the pixels that were 'under' them are gone. What you would have to do is:
Track the mousedown event and see if it's in the 'right' location for dragging. (You'll have to keep track of what images/objects are where and perform mouse hit detection.)
As the user drags the mouse, redraw the entire canvas from scratch, drawing the image in a new location each time based on the offset between the current mouse location and the initial mousedown location.
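For what it's worth, here is that two-step pattern sketched in Java (Swing) rather than JavaScript, since the logic is identical either way: keep your own record of the objects, hit-test on mouse-down, and redraw the whole scene from scratch on every drag (all names here are my own):

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

// Sketch of immediate-mode dragging: the panel retains nothing between
// paints, so we track the object ourselves and repaint everything per drag.
public class DragPanel extends JPanel {
    private final Rectangle box = new Rectangle(40, 40, 80, 60); // one draggable "object"
    private Point grabOffset; // null when not dragging

    DragPanel() {
        MouseAdapter mouse = new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                if (box.contains(e.getPoint())) // hit test against our own records
                    grabOffset = new Point(e.getX() - box.x, e.getY() - box.y);
            }
            @Override public void mouseDragged(MouseEvent e) {
                if (grabOffset != null) {
                    box.setLocation(e.getX() - grabOffset.x, e.getY() - grabOffset.y);
                    repaint(); // redraw the whole scene each time
                }
            }
            @Override public void mouseReleased(MouseEvent e) { grabOffset = null; }
        };
        addMouseListener(mouse);
        addMouseMotionListener(mouse);
    }

    @Override protected void paintComponent(Graphics g) {
        super.paintComponent(g); // clears the surface; nothing is retained
        g.fillRect(box.x, box.y, box.width, box.height);
    }

    public static void main(String[] args) {
        EventQueue.invokeLater(() -> {
            JFrame f = new JFrame("drag demo");
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.add(new DragPanel());
            f.setSize(400, 300);
            f.setVisible(true);
        });
    }
}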
Some alternatives that I might suggest:
SVG
Pure HTML
Multiple layered canvases, and drag one transparent canvas over another.
The HTML Canvas is good for a lot of things. User interaction with "elements" that appear to be distinct (but are not) is not one of those things.
Update: Here are some examples showing dragging on the canvas:
http://developer.yahoo.com/yui/examples/dragdrop/dd-region.html
http://www.redsquirrel.com/dave/work/interactivecanvas/
http://langexplr.blogspot.com/2008/11/using-canvas-html-element.html
None of these have created a separate library for tracking your shapes for you, however.
KineticJS is one such JavaScript library that you can use exclusively for animations.
Here's the link: html5canvastutorials
Canvas and jCanvas
You're definitely gonna want to check out jCanvas. It's a super clean wrapper for Canvas, which kicks open a lot of doors without adding code complexity. It makes things like this a breeze.
For example, here's a little sandbox of something close to what you're after, with dragging and redrawing built right in:
Drawing an Arrow Between Two Elements.
I ventured down the road of doing everything with DIVs and jQuery but it always fell short on interactivity and quality.
Hope that helps others, like me.
JP
As you create new draggable objects, whether they are windows, cards, shapes or images, you can store them in an array of "objects currently not selected". When you click on them, select them, or start dragging them, you remove them from that array. This way you can control what moves on a particular mousedown or mousemove event by checking whether an object is in the "not selected" array. If it is selected, it will not be in that array, so you can move the mouse pointer over other shapes while dragging without them becoming dragged as well.
Creating arrays of the objects you would like to drag also helps with hierarchy. Canvas draws the pixels belonging to the foremost object last, so if the objects are in an array you simply move them between positions, say from objectArray[20] to objectArray[4]; as you iterate through the array and draw the objects stored in its elements, you control which objects appear in front of or behind others. A sketch of that bookkeeping follows.
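A minimal sketch of the idea (in Java for consistency with the other sketches here; all names are hypothetical):

import java.util.ArrayList;
import java.util.List;

// Draw order follows list order, so moving an object to the end of the
// list brings it to the front on the next redraw.
class ZOrderDemo {
    interface Drawable { void draw(); }

    final List<Drawable> objects = new ArrayList<>();     // back-to-front draw order
    final List<Drawable> notSelected = new ArrayList<>(); // pool checked on mouse events

    // On mousedown over an object: pull it from the "not selected" pool
    // and re-append it so it paints last (front-most).
    void select(Drawable picked) {
        notSelected.remove(picked);
        objects.remove(picked);
        objects.add(picked);
    }

    // When dragging ends, return the object to the "not selected" pool.
    void deselect(Drawable picked) {
        notSelected.add(picked);
    }

    void redraw() {
        for (Drawable d : objects) d.draw(); // later elements paint over earlier ones
    }
}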