Take a look at this scenario: I have two characters, and one shoots two bullets in the direction of the other. The bullets are fired instantly and travel at infinite speed. How do I detect a collision?
Here's an image to illustrate the problem:
The red bullet would clearly miss, but the green bullet would hit. How do I perform this kind of collision test?
This type of collision test is called ray casting. Implementations vary from simple to very complex, depending on your specific application and how much time you're willing to invest in performance gains. Definitely search online for the topic if you're interested, or pick up a game programming book; it's a common operation in 3D games.
If you know that there will only ever be two bullets, then you can solve this with just a distance check between the ray created by the fired bullet and the other bullet. If the distance is less than the summed radii of the bullets, then you know they've hit.
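A minimal sketch of that check (the 2D type and function names are my own, not from the question):

#include <cmath>

struct Vec2 { float x, y; };

float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// 'origin' and unit-length 'dir' describe the fired bullet's ray;
// 'target' is the other bullet's center.
bool rayHitsBullet(Vec2 origin, Vec2 dir, Vec2 target, float radiusA, float radiusB) {
    Vec2 toTarget = { target.x - origin.x, target.y - origin.y };
    float t = dot(toTarget, dir);       // projection of the target onto the ray
    if (t < 0.0f) return false;         // target is behind the shooter
    // Squared distance from the target to its closest point on the ray.
    float distSq = dot(toTarget, toTarget) - t * t;
    float r = radiusA + radiusB;
    return distSq <= r * r;
}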
If you're making some sort of game engine where many bullets will be moving, then the simplest approach I can think of is to move the bullet along the ray it is fired from (by normalizing the bullet's movement vector) in small increments (no larger than the bullet's radius) and perform collision checks at each step.
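A sketch of that stepping approach, reusing the Vec2 helper from the sketch above (checkCollisionAt() is a hypothetical stand-in for whatever overlap query your engine provides):

// March the bullet along its normalized direction in steps no larger
// than its radius, testing for a hit at each position.
bool marchRay(Vec2 origin, Vec2 dir, float bulletRadius, float maxRange) {
    for (float t = 0.0f; t <= maxRange; t += bulletRadius) {
        Vec2 pos = { origin.x + dir.x * t, origin.y + dir.y * t };
        if (checkCollisionAt(pos, bulletRadius))  // your engine's query
            return true;                          // hit found at 'pos'
    }
    return false;
}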
No matter what ray casting method you end up using, it will be tightly integrated with whatever system you're using for spatial partitioning. There's no way to avoid querying many spatial locations when you're ray casting, so be sure that you use an effective space partitioning system for your purposes.
Currently, for detection (localization + recognition tasks) in computer vision we mainly use deep learning algorithms. Two types of detectors exist:
one-stage: SSD, YOLO, RetinaNet, ...
two-stage: R-CNN, Fast R-CNN and Faster R-CNN, for example
Using these detectors on very small objects (10 pixels, for example) is a very challenging task, and it seems the one-stage algorithms do worse than the two-stage ones. But I do not really understand why it works better with Faster R-CNN, for example. In fact, one-stage and two-stage detectors both use the anchor concept, and most of them use the same backbones, like VGG16 or ResNet-50/ResNet-101. That means the receptive field is the same. For example, I tried to detect very small objects with RetinaNet and with Faster R-CNN. With RetinaNet, small objects are not detected, contrary to Faster R-CNN. I do not understand why. What is the theoretical explanation? (same backbone: ResNet-50)
I think networks like RetinaNet are, in general, trying to bridge the gap you mention. In one-stage networks we usually place anchor boxes of varying scales on the feature maps produced by the backbone. These feature maps are produced by heavily downsampling the input image, and a lot of information about small objects can be lost in that operation; at the backbone's deepest stride of 32, for instance, a 10-pixel object covers less than a third of a single feature-map cell. While this is the case with one-stage detectors, in two-stage detectors the flexibility of the RPN means the network may still propose regions that are small, and this may help it perform slightly better than its one-stage counterparts.
I don't think you should be very surprised that both of these might use the same backbone; after the conv features are extracted, the two networks use different methods to perform detection.
Hope this helps. Let me know if I wasn't clear enough or if you have questions.
I'm simulating rope-to-rope collision in Bullet and I'd like it to be as accurate as possible. I don't need the simulation to be real-time. The rope consists of rigid bodies connected using constraints (e.g. btConeTwistConstraint).
Which settings do I need to tweak for the simulation to become more realistic and accurate?
From my experiments, decreasing the third parameter (the fixed internal time step, 1/60 s by default) to 1/300 gives additional accuracy:
gDynamicsWorld->stepSimulation(SIMULATION_STEP_TIME, 1, 1.0f/300.0f);
Also, increasing the solver iteration count helps a bit:
btContactSolverInfo& info = dynamicsWorld->getSolverInfo();
info.m_numIterations = 50;
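Putting the two tweaks together, a minimal sketch (variable names are taken from the snippets above; note that Bullet silently drops simulation time unless timeStep < maxSubSteps * fixedTimeStep, so the substep budget has to grow when the fixed step shrinks):

// Smaller fixed step plus more solver iterations; give Bullet enough
// substeps that no frame time is discarded.
btContactSolverInfo& info = gDynamicsWorld->getSolverInfo();
info.m_numIterations = 50;
const btScalar fixedStep = 1.0f / 300.0f;
const int maxSubSteps = 10;  // must keep SIMULATION_STEP_TIME < maxSubSteps * fixedStep
gDynamicsWorld->stepSimulation(SIMULATION_STEP_TIME, maxSubSteps, fixedStep);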
There are videos where people simulate thousands of bodies in Blender. I'd like to achieve a similar effect in a C++ app.
I have a closed surface mesh generated using MeshLab from point clouds. I need to get a volume mesh for it so that it is not a hollow object, but I can't figure out how. I need to get an *.stl file for printing. Can anyone help me get a volume mesh? (I would prefer an easy solution rather than a complex algorithm.)
Given an oriented watertight surface mesh, an oracle function can be derived that determines whether a query line segment intersects the surface (and where): shoot a ray from one end-point and use the even-odd rule (after having spatially indexed the faces of the mesh).
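As a minimal sketch of the even-odd part (a brute-force scan over a triangle soup; the spatial index and degenerate grazing hits are left out):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Tri { Vec3 a, b, c; };

Vec3 sub(Vec3 u, Vec3 v) { return {u.x - v.x, u.y - v.y, u.z - v.z}; }
Vec3 cross(Vec3 u, Vec3 v) { return {u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x}; }
double dot(Vec3 u, Vec3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

// Moller-Trumbore ray/triangle intersection: returns true (and the ray
// parameter t) when the ray orig + t*dir, t >= 0, crosses the triangle.
bool rayTri(Vec3 orig, Vec3 dir, const Tri& tri, double& t) {
    const double eps = 1e-12;
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 s = sub(orig, tri.a);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t >= 0.0;
}

// Even-odd rule: a point is inside a watertight mesh iff a ray from it
// crosses the surface an odd number of times.
bool pointInside(Vec3 p, const std::vector<Tri>& mesh) {
    Vec3 dir = {0.5773, 0.5774, 0.5775};  // arbitrary, avoids axis-aligned degeneracies
    int crossings = 0;
    double t;
    for (const Tri& tri : mesh)
        if (rayTri(p, dir, tri, t)) ++crossings;
    return (crossings % 2) == 1;
}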
Volumetric meshing algorithms can then be applied using this oracle function to tessellate the interior, typically variants of Marching Cubes or Delaunay-based approaches (see 3D Surface Mesh Generation in the CGAL documentation). The initial surface will however not be exactly preserved.
To my knowledge, MeshLab supports only surface meshes, so it is unlikely to provide a ready-to-use filter for this. Volume mesher packages should however offer this functionality (e.g. TetGen).
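For instance, a TetGen sketch. This assumes TetGen 1.5's tetgenio/tetrahedralize C++ API and that the loader takes the file base name; the file names are placeholders:

#include "tetgen.h"

int main() {
    tetgenio in, out;
    // Load the watertight surface; TetGen reads several surface formats.
    if (!in.load_stl(const_cast<char*>("surface"))) return 1;  // assumption: base name, no extension
    // 'p' tetrahedralizes the piecewise linear complex, 'q' adds a
    // quality bound on the resulting tetrahedra.
    tetrahedralize(const_cast<char*>("pq1.414"), &in, &out);
    out.save_nodes(const_cast<char*>("volume"));
    out.save_elements(const_cast<char*>("volume"));
    return 0;
}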
The question is not perfectly clear, so I'll try to give a different interpretation. According to your last sentence:
I need to get an *.stl file for printing
This means you need a 3D model that is suitable for fabrication on a 3D printer, i.e. a watertight mesh. A watertight mesh defines the interior of a volume unambiguously: it is closed (no boundary), 2-manifold (mainly, each edge is shared by exactly two faces), and free of self-intersections.
MeshLab provides tools for visualizing boundaries, non-manifold edges, and self-intersections. Correcting them is possible in many different ways (deleting the non-manifold parts and filling holes, or drastic remeshing).
I want to detect the intersection of two objects (sprites) in my scene. I don't want the objects' geometric intersection to cause a collision between the bodies in the scene.
I've created a PhysicsBody for both of my object shapes, but I can't find a way to detect the intersection without having the two bodies hit each other on impact.
I'm using cocos2d-x 3+ with the default Chipmunk engine (which I'd like to stick with for now).
The question is: how do I detect the intersection of elements without having them physically push each other when they intersect?
The answer is very simple (though it took me two days to figure it out).
When a contact is detected, onContactBegin() is called; returning false from it when the relevant shape is being hit will stop the physical interaction while still letting you observe the intersection.
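A minimal sketch of that listener (assuming a cocos2d-x 3.x scene; note that onContactBegin only fires for bodies whose contact test bitmasks overlap, e.g. after body->setContactTestBitmask(0xFFFFFFFF)):

auto contactListener = cocos2d::EventListenerPhysicsContact::create();
contactListener->onContactBegin = [](cocos2d::PhysicsContact& contact) {
    // The intersection is observable here: contact.getShapeA() and
    // contact.getShapeB() identify the overlapping bodies.
    // Returning false suppresses the physical collision response,
    // so the bodies pass through each other.
    return false;
};
_eventDispatcher->addEventListenerWithSceneGraphPriority(contactListener, this);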
I want to implement a gameplay recording feature in a project, which would record only player input and the seed of the RNG at the beginning of the level. Then I could take such a recording and play it back on my computer to test it for validity.
I'm only concerned with some numerical differences which might appear between different Flash Player versions, operating systems or CPUs (or whatever else might be affected). The project would be written for Flash Player 10.0.0+. The things I am concerned about:
Operations on Numbers: Multiplying, dividing; bit operations (possibly bit shifting too); addition and subtraction; modulo
Math class: sin, cos and atan2; rounding
localToGlobal/globalToLocal with rotations and scaling
I won't be using stuff like hitTest, getObjectsUnderPoint, hitTestPoint, getBounds and so on; all collisions will be geometrical.
So, are there any chances that using any of the things listed above will yield different results on different systems? If so, what can I do to avoid them?
That's an interesting question...
It's not a "will this game play the same on multiple platforms" question, it's a "will a recording of user inputs produce the exact same output when simulated" question.
My gut says "don't worry about it, the Flash VM abstracts the differences away", but as I think about it more, there are some areas that might be a problem.
First, I wouldn't record anything time-based. A user hitting a key at 1.21 seconds in might be tough to predict whether that happens before or after a frame's worth of computation, especially if either the recording or playback computer was under load. Trying to time tweens with user input is probably a recipe for failure.
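One way around that is to key recorded inputs to the simulation frame rather than to wall-clock time; a minimal sketch (the types and names are illustrative only):

#include <vector>

struct InputEvent { int frame; int keyCode; bool pressed; };
std::vector<InputEvent> inputLog;

// Recording: tag each input with the frame index it was consumed on,
// never with a timestamp.
void recordKey(int frame, int keyCode, bool pressed) {
    inputLog.push_back({frame, keyCode, pressed});
}

// Playback: before simulating frame N, apply exactly the events logged
// for frame N. Machine load can then delay a frame, but it can never
// reorder inputs relative to the simulation.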
Accuracy of floating point should be OK. The algorithms that define when to round are well documented in IEEE-754, and all VMs use 64-bit Numbers regardless of the OS they're running on. I'm guessing the math operations are equally well specified.
I think it's good to avoid hitTest and whatnot. I imagine they theoretically could be influenced by whether or not hardware acceleration is being used. But I'm not an expert there, so maybe not.
Now localToGlobal/globalToLocal... I just don't know. They might have that theoretical hardware acceleration problem, but I tend to doubt it.
So I guess I didn't give any real answers.
Trig functions WILL NOT WORK! You must create custom implementations of the following: acos, asin, atan, atan2, cos, exp, log, pow, sin, and sqrt. And obviously, random().
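To illustrate what such a custom implementation can look like, here is a C++ sketch of a deterministic sine built only from addition, multiplication and division, which (per the answer above) round identically everywhere; the same approach ports to ActionScript. The term count and range reduction are deliberately simplistic:

// Fixed-degree Taylor series: every machine performs the exact same
// sequence of IEEE-754 operations, so the result is bit-identical.
double detSin(double x) {
    const double PI = 3.141592653589793;
    while (x > PI)  x -= 2.0 * PI;   // crude range reduction to [-pi, pi]
    while (x < -PI) x += 2.0 * PI;
    double term = x, sum = x;        // first Taylor term is x itself
    for (int n = 1; n <= 9; ++n) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));  // next odd-power term
        sum += term;
    }
    return sum;
}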
I'm still in the process of testing the Number class. I can't say for sure whether addition/subtraction/etc. will be consistent on every machine.
It is very unlikely (although possible) that things will behave in a noticeably different way on different computers. Even if they did, it would be a very rare event and not something I would recommend worrying about unless it is absolutely crucial to gameplay.