Detect intersection without causing bodies to collide - cocos2d-x

I want to detect the intersection of two objects (sprites) in my scene, but I don't want the geometric intersection of the objects to cause a collision between the bodies in the scene.
I've created a PhysicsBody for both of my object shapes, but I can't find a way to detect the intersection without having the two bodies push each other apart on impact.
I'm using cocos2d-x 3+ with the default Chipmunk engine (which I'd like to stick with for now).
The question is: how do I detect the intersection of elements without having them physically push each other when they intersect?

The answer is very simple (though it took me two days to figure it out).
When a contact is detected, onContactBegin() is called. If the contact involves the relevant shapes, returning false from that callback cancels the physical interaction, so the bodies overlap freely while you still get notified of the intersection.
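For reference, here's a minimal sketch of that setup with the cocos2d-x 3.x contact listener, placed in your scene's init(); the sprite variables and the bitmask value are placeholders for your own scene:

// Both bodies need a contact test bitmask, otherwise onContactBegin() never fires.
spriteA->getPhysicsBody()->setContactTestBitmask(0xFFFFFFFF);
spriteB->getPhysicsBody()->setContactTestBitmask(0xFFFFFFFF);

auto listener = EventListenerPhysicsContact::create();
listener->onContactBegin = [](PhysicsContact& contact) -> bool {
    auto nodeA = contact.getShapeA()->getBody()->getNode();
    auto nodeB = contact.getShapeB()->getBody()->getNode();
    // Check nodeA/nodeB (tags, names, ...) to see which sprites intersected
    // and trigger your game logic here.

    // Returning false tells the engine to ignore this contact, so the bodies
    // overlap instead of pushing each other apart.
    return false;
};
_eventDispatcher->addEventListenerWithSceneGraphPriority(listener, this);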

Related

Get closest mesh point to actor?

My player has a collision sphere to detect any static mesh that gets close to it.
I need to find the closest point on the static meshes that are colliding with it.
I think I could use "Get Actor Bounds" to get the mesh boundaries and then use them to find the closest point but I'm not sure how to do it.
I also thought about using a trace but I would need to cast many of them in order to find the right one, and I would need a way to make the trace hit only the meshes I care about.
Right now I'm simply using the "Get Actor Location" but that gives me the center of the static mesh.
How should I approach the problem?
The straightforward way to get the closest point is to compare the distance from your point to each vertex: a simple for loop that keeps a running minimum of the distance.
Accessing mesh vertices can be tricky in Unreal, though, especially for a StaticMesh, because the vertex data lives in GPU buffers and has to be copied back and converted. I don't recommend iterating over vertices if you want a real-time game.
To avoid iterating over every mesh vertex, you could also look at this function:
https://docs.unrealengine.com/4.26/en-US/BlueprintAPI/Collision/GetClosestPointonCollision/
Alternatively, you could run a multi trace with a big sphere and iterate over every hit location, but I'm not sure the impact point in the hit result is always the closest point on the object.
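If it helps, here's a rough Unreal C++ sketch of that GetClosestPointOnCollision approach; the function and component names I introduce are placeholders, and the return-value convention is worth double-checking against the docs:

#include "CoreMinimal.h"
#include "Components/SphereComponent.h"
#include "Components/PrimitiveComponent.h"

// Ask every component overlapping the player's collision sphere for its
// closest collision point to the sphere's center, and keep the nearest one.
FVector FindClosestPointOnNearbyMeshes(USphereComponent* CollisionSphere)
{
    const FVector QueryPoint = CollisionSphere->GetComponentLocation();

    TArray<UPrimitiveComponent*> Overlapping;
    CollisionSphere->GetOverlappingComponents(Overlapping);

    FVector BestPoint = QueryPoint;
    float BestDistance = TNumericLimits<float>::Max();

    for (UPrimitiveComponent* Comp : Overlapping)
    {
        FVector PointOnBody;
        // Distance to the closest point on the component's collision;
        // a negative result means the query isn't supported for that shape.
        const float Distance = Comp->GetClosestPointOnCollision(QueryPoint, PointOnBody);
        if (Distance >= 0.f && Distance < BestDistance)
        {
            BestDistance = Distance;
            BestPoint = PointOnBody;
        }
    }
    return BestPoint;
}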

How to do detections in YOLOv5 only in a region of interest?

I want to detect objects only in a specified region and ignore all the other detections outside the ROI.
If I understand your question correctly, you want to detect objects that are present on the road surface.
One way to do that would be to first detect the road surface, either by detecting lane markings (https://github.com/amusi/awesome-lane-detection) or by using a road free-space detection model (https://github.com/fabvio/ld-lsi/). Then you can either feed only that part of the image to YOLOv5, or feed the complete image and filter the detections afterwards based on whether they lie on the road surface (i.e. whether the object's bounding box overlaps the road region). If it does, keep the detection; otherwise ignore it. A rough sketch of that filtering step follows.
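To illustrate the filtering step, here's a small geometric sketch (written in C++ here, but the idea is language-agnostic): the Detection struct stands in for whatever boxes your YOLOv5 run returns, and roadPolygon would come from your lane or free-space detector.

#include <vector>

struct Detection { float x1, y1, x2, y2; int classId; float score; };
struct Point2D { float x, y; };

// Standard ray-crossing point-in-polygon test.
bool insidePolygon(const std::vector<Point2D>& poly, Point2D p) {
    if (poly.size() < 3) return false;
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        const bool crosses = (poly[i].y > p.y) != (poly[j].y > p.y);
        if (crosses &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                      (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;
    }
    return inside;
}

// Keep only detections whose bottom-center (roughly where the object touches
// the ground) falls inside the road-surface polygon.
std::vector<Detection> filterToRoi(const std::vector<Detection>& dets,
                                   const std::vector<Point2D>& roadPolygon) {
    std::vector<Detection> kept;
    for (const auto& d : dets) {
        const Point2D footPoint{(d.x1 + d.x2) * 0.5f, d.y2};
        if (insidePolygon(roadPolygon, footPoint))
            kept.push_back(d);
    }
    return kept;
}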

AS3 Bullet hitscan

Take a look at this scenario: I have two characters, and one shoots two bullets in the direction of the other character. The bullets are fired instantly and travel at infinite speed. How do I detect a collision?
Here's an image to illustrate the problem:
The red bullet would clearly miss, but the green bullet would hit. How do I perform this kind of collision test?
This type of collision test is called ray casting. Its implementations can vary from simple to very complex, depending on your specific application and how much time you're willing to invest into performance gains. Definitely search online for the topic if you're interested, or pick up a game programming book. It's a common operation for 3d games.
If you know that there will only ever be two bullets, you can solve this with just a distance check between the ray created by the fired bullet and the other bullet: if that distance is less than their summed radii, you know they've hit. A sketch of that check is below.
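For illustration, here's a sketch of that ray-vs-circle distance check (written in C++; an AS3 version is a direct translation, and all the names here are made up):

#include <cmath>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Distance from point p to the ray starting at origin and going along dir.
// dir must be non-zero but does not need to be normalized.
float distancePointToRay(Vec2 origin, Vec2 dir, Vec2 p) {
    const Vec2 toP{p.x - origin.x, p.y - origin.y};
    float t = dot(toP, dir) / dot(dir, dir);
    if (t < 0.0f) t = 0.0f;  // the target is behind the shooter
    const Vec2 closest{origin.x + t * dir.x, origin.y + t * dir.y};
    return std::sqrt((p.x - closest.x) * (p.x - closest.x) +
                     (p.y - closest.y) * (p.y - closest.y));
}

// The shot hits if the ray passes within the combined radii.
bool hitscanHits(Vec2 muzzle, Vec2 shotDir, Vec2 targetCenter,
                 float bulletRadius, float targetRadius) {
    return distancePointToRay(muzzle, shotDir, targetCenter)
           <= bulletRadius + targetRadius;
}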
If you're making some sort of game engine where many bullets will be moving, then the simplest way that I can think of accomplishing this is to move the bullet along the ray that it is fired from (by normalizing the bullet's movement vector) in small increments (no larger than the bullet's radius) and perform collision checks at each step.
No matter what ray casting method you end up using, it will be tightly integrated with whatever system you're using for spatial partitioning. There's no way to avoid querying many spatial locations when you're ray casting, so be sure to use a space partitioning scheme that's effective for your purposes.

OpenGL Newbie - Best way to move objects about in a scene

I'm new to OpenGL and graphics programming in general, though I've always been interested in the topic so have a grounding in the theory.
What I'd like to do is create a scene in which a set of objects move about. Specifically, they're robotic soccer players on a field. The objects are:
The lighting, field and goals, which don't change
The ball, which is a single mesh which will undergo translation and rotation but not scaling
The players, which are each composed of body parts, each of which are translated and rotated to give the illusion of a connected body
So to my GL-novice mind, I'd like to load these objects into the scene and then just move them about. No properties of the vertices will change: neither their positions nor their textures/normals/etc. Only the transformation of their 'parent' object as a whole changes.
Furthermore, the players all have identical bodies. Can I optimise somehow by loading the model into memory once, then painting it multiple times with a different transformation matrix each time?
I'm currently playing with OpenTK which is a lightweight wrapper on top of OpenGL libraries.
So a helpful answer to this question would either be:
What parts of OpenGL give me what I need? Do I have to redraw all the faces every frame, or just those that move? Can I just update some transformation matrices? How simple can I make this using OpenTK? What would pseudocode look like? Or,
Is there a better framework that's free (ideally open source) and provides this level of abstraction?
Note that I require any solution to run in .NET across multiple platforms.
Using so-called vertex arrays is probably the surest way to optimize such a scene. Here's a good tutorial:
http://www.songho.ca/opengl/gl_vertexarray.html
A vertex array, or more generally a GL data array, holds data such as vertex positions, normals and colors. You can also have an array holding indices into these buffers to indicate the order in which to draw them.
Then there are a few closely related functions that manage these arrays: allocating them, filling them with data and drawing them. You can render a complex mesh with a single OpenGL call such as glDrawElements().
These arrays generally reside in host memory. A further optimization is to use vertex buffer objects (VBOs), which follow the same concept but reside in GPU memory and can be somewhat faster. Here's a bit about that:
http://www.songho.ca/opengl/gl_vbo.html
Working with arrays and buffers, as opposed to good old glBegin()..glEnd(), has the advantage of being compatible with OpenGL ES. In OpenGL ES, arrays and buffers are the only way to draw anything. A short example follows.
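To make that concrete, here's a minimal vertex-array sketch in old-style C OpenGL; the quad data is just a stand-in for a real mesh loaded elsewhere:

#include <GL/gl.h>

// A unit quad: four positions plus an index array describing two triangles.
GLfloat quadVertices[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    1.0f, 1.0f, 0.0f,
    0.0f, 1.0f, 0.0f
};
GLubyte quadIndices[] = { 0, 1, 2,   2, 3, 0 };

void drawQuad(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, quadVertices);  // tell GL where the positions live
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, quadIndices);
    glDisableClientState(GL_VERTEX_ARRAY);
}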
--- EDIT
Moving, rotating and otherwise transforming things in the scene is done with the modelview matrix and does not require any changes to the mesh data. To illustrate:
you have your initialization:
void initGL() {
    // create the set of vertex arrays used to draw a player
    // fill them with the player's vertex data
    // create the set of vertex arrays for the ball
    // fill them with the ball's vertex data
}

void drawScene() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // set up the view transformation
    gluLookAt(/* eye, center, up */);

    drawPlayingField();

    glPushMatrix();
    glTranslatef(/* player 1 position */);
    drawPlayer();
    glPopMatrix();

    glPushMatrix();
    glTranslatef(/* player 2 position */);
    drawPlayer();
    glPopMatrix();

    glPushMatrix();
    glTranslatef(/* ball position */);
    glRotatef(/* ball rotation */);
    drawBall();
    glPopMatrix();
}
Since you are beginning, I suggest sticking to immediate-mode rendering and getting that to work first. Once you're more comfortable, you can move up to vertex arrays; more comfortable still, VBOs; and finally, if you get super comfortable, instancing, which is the fastest possible solution for your case (no deformations, only whole-object transformations).
Unless you're trying to implement something like Fifa 2009, it's best to stick to the simple methods until you have a demonstrable efficiency problem. No need to give yourself headaches prematurely.
For whole object transformations, you typically transform the model view matrix.
glPushMatrix();
// do gl transforms here and render your object
glPopMatrix();
For loading objects, you'll eventually need to come up with your own format or implement something that can load existing mesh formats (OBJ is one of the easiest to support). There are high-level libraries to simplify this, but I recommend going with plain OpenGL for the experience and control you'll gain.
I'd hoped the OpenGL API might be easy to navigate via IDE support (IntelliSense and such). After a few hours it became apparent that some ground rules needed to be established, so I stopped typing and RTFM:
http://www.glprogramming.com/red/
That's the best advice I can give to anyone else who finds this question while getting their OpenGL footing. It's a long read, but empowering.

AS3: How to access pixel data efficiently?

I'm working a game.
The game requires entities to analyse an image and head towards pixels with specific properties (high red channel, etc.)
I've looked into Pixel Bender, but it only seems useful for writing new colors to the image. At the moment, even at a low resolution (200x200), just one entity scanning the image slows things down to 1-2 frames per second.
I'm embedding the image and instantiating it as a Bitmap as a child of the stage. The 1-2 FPS situation occurs when using BitmapData.getPixel() (on each pixel), with a distance calculation beforehand.
I'm wondering if there's any way I can do this more efficiently... My first thought was some sort of spatial partitioning coupled with splitting the image up into many smaller pieces.
I also feel like Pixel Bender should be able to help somehow, however I've had little experience with it.
Cheers for any help.
Jonathan
Let us call the pixels which entities head towards "attractors" because they attract the entities.
You describe a low frame rate due to scanning for attractors, which indicates that you may be scanning the image on every frame. You don't specify whether the scanned image is static or changes as frequently as, e.g., a video input. If the image changes with every frame, so that you must somehow recalculate the attractors, then what you are attempting is real-time computer vision on the ABC Virtual Machine; please see below.
If you have an unchanging image, then the most important optimization you can make is to scan the image one time only, then save a summary (or "memoization") of the locations of the attractors. At each rendering frame, rather than scan the entire image, you can search the list or array of known attractors. When the user causes the image to change, you can recalculate from scratch, or update your calculations incrementally -- as you see fit.
If you are attempting to do real-time computer vision with ActionScript 3, I suggest you look at the new vector types of Flash 10.1 and also that you look into using either abcsx to write ABC assembly code, or use Adobe's Alchemy to compile C onto the Flash runtime. ABC is the byte code of Flash. In other words, reconsider the use of AS3 for real-time computer vision.
BitmapData has a getPixels() method (note the plural). It returns a ByteArray of all the pixels, which can be iterated much faster than a nested pair of for loops calling getPixel() on every pixel. Unfortunately, byte arrays are, as the name implies, one-dimensional arrays of bytes, so iterating each pixel (4 bytes) requires a for loop rather than a for each loop. You get access to each pixel's color channels individually by default, which sounds like what you want (finding pixels with a high red channel), so you won't have to bitwise-AND each pixel value to isolate a particular channel.
I read somewhere that getPixel is very slow, so that's where I figured you'd save the most. I could be wrong, so it'd be worth timing it.
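To make the scan-once-and-cache idea concrete, here's a rough C++ sketch (an AS3 version would walk the ByteArray from getPixels() in the same way); the ARGB byte layout and the threshold are illustrative:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Attractor { int x, y; };

// One pass over a flat 32-bit ARGB buffer, collecting "attractor" pixels
// whose red channel exceeds a threshold. Run this once and cache the result
// instead of calling getPixel() on every pixel every frame.
std::vector<Attractor> findHighRedPixels(const std::vector<uint8_t>& argb,
                                         int width, int height,
                                         uint8_t redThreshold) {
    std::vector<Attractor> attractors;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const std::size_t i = 4 * (static_cast<std::size_t>(y) * width + x);
            const uint8_t red = argb[i + 1];  // bytes are A, R, G, B
            if (red >= redThreshold)
                attractors.push_back({x, y});
        }
    }
    return attractors;
}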
I would say Heath Hunnicutt's answer is a good one. If the image doesn't change, just store all the color values in a Vector or ByteArray (or whatever) and use it as a lookup table, so you don't need to call getPixel() every frame.