libgdx - determine whether the camera can see an object

How can this be done? I want a method that determines whether my camera can see my 3D object, i.e. whether the object is inside the camera's 2D viewport.

A libGDX camera has a frustum, so this should help you:
Frustum camFrustum = camera.frustum;
if (camFrustum.pointInFrustum(object.x, object.y, object.z)
        || camFrustum.pointInFrustum(object.x + object.width, object.y, object.z)
        || camFrustum.pointInFrustum(object.x + object.width, object.y + object.height, object.z)
        || camFrustum.pointInFrustum(object.x, object.y + object.height, object.z))
{
    // Object is in viewport
}
In 2D, object.z should be set to 1 or something like that, I think. Just try it. For 3D there are also other methods: sphereInFrustum, boundsInFrustum, and maybe others.
These methods are used for view frustum culling, which means that objects you cannot see are not rendered, so the GPU has less work to do.

Related

setCenter() Method is not properly centering sprite texture on box2d fixture

The past few days I've been trying to figure out a display bug I don't understand. I've been working on a simple 2D platformer with Box2D and orthogonal Tiled maps. So far so good: the physics work, and using the b2d debug renderer I can verify proper player fixture and camera movement through the level.
As the next step, I've tried to load textures to display sprites instead of debug shapes. This is where I stumble. I can load animations for my player body/fixture, but when I use the setCenter() method to center the texture on the fixture, it is always off center.
I've tried approaches via halving texture widths and heights, hoping to center the texture on the player fixture, but I get the exact same off-position rendering. I've played around with world/camera/screen unit coordinates, but the misalignment persists.
I'm creating the player in my Player class with the following code.
First I define the player in box2d:
//define player's physical behaviour
public void definePlayer() {
//definitions to later use in a body
BodyDef bdef = new BodyDef();
bdef.position.set(120 / Constants.PPM, 60 / Constants.PPM);
bdef.type = BodyDef.BodyType.DynamicBody;
b2body = world.createBody(bdef);
//Define needed components of the player's main fixture
FixtureDef fdef = new FixtureDef();
PolygonShape shape = new PolygonShape();
shape.setAsBox(8 / Constants.PPM, 16 / Constants.PPM); //size of the player hitbox
//set the player's category bit
fdef.filter.categoryBits = Constants.PLAYER_BIT;
//set which category bits the player should collide with. If not mentioned here, no collision occurs
fdef.filter.maskBits = Constants.GROUND_BIT |
Constants.GEM_BIT |
Constants.BRICK_BIT |
Constants.OBJECT_BIT |
Constants.ENEMY_BIT |
Constants.TREASURE_CHEST_BIT |
Constants.ENEMY_HEAD_BIT |
Constants.ITEM_BIT;
fdef.shape = shape;
b2body.createFixture(fdef).setUserData(this);
}
Then I set the texture region to be drawn in the Player class constructor:
//define in box2d
definePlayer();
//set initial values for the player's location, width and height, initial animation.
setBounds(0, 0, 64 / Constants.PPM, 64 / Constants.PPM);
setRegion(playerStand.getKeyFrame(stateTimer, true));
And finally, I update() my player:
public void update(float delta) {
//center position of the sprite on its body
// setPosition(b2body.getPosition().x - getWidth() / 2, b2body.getPosition().y - getHeight() / 2);
setCenter(b2body.getPosition().x, b2body.getPosition().y);
setRegion(getFrame(delta));
//set all the boolean flags during update cycles appropriately. DO NOT manipulate b2bodies
//while the simulation happens! therefore, only set flags there, and call the appropriate
//methods outside the simulation step during update
checkForPitfall();
checkIfAttacking();
}
And my result is this, facing right, and this, facing left.
Update:
I've been trying to just run
setCenter(b2body.getPosition().x, b2body.getPosition().y);
as suggested, and I got the following result:
facing right and facing left.
The sprite texture flip code is as follows:
if((b2body.getLinearVelocity().x < 0 || !runningRight) && !region.isFlipX()) {
region.flip(true, false);
runningRight = false;
} else if ((b2body.getLinearVelocity().x > 0 || runningRight) && region.isFlipX()) {
region.flip(true, false);
runningRight = true;
}
I'm testing whether either the boolean flag for facing right is set or the x-axis velocity of my player b2body is positive/negative, and whether my texture region is already flipped or not, and then I use libGDX's flip() accordingly. I should not be messing with fixture coordinates anywhere here, hence my confusion.
The coordinates of box2d fixtures are offsets from the position; the position isn't necessarily the center (although it could be, depending on your shape definition offsets). So in your case I think the position is actually the lower-left point of the box2d polygon shape.
In that case you don't need to adjust for width and height, because sprites are also drawn from the bottom-left position. So all you need is:
setPosition(b2body.getPosition().x , b2body.getPosition().y );
I'm guessing you flip the box2d body when the player looks left; the position of the shape is then the bottom right, so the sprite offset of width/2 and height/2 is measured from the bottom right instead. So, specifically when looking left, you need an offset of:
setPosition(b2body.getPosition().x - getWidth() , b2body.getPosition().y );
I think looking right will be fixed by this, but I don't know for sure how you handle looking left in terms of what you do to the body; something must be done, because the offset changes entirely as shown in your captures. If you aren't doing any flipping, you could add to the question how you handle the facing direction.
EDIT
It seems the answer was that the sprite wasn't centered in the sprite sheet and this additional space around the sprite caused the visual impression of being in the wrong place (see comments).

AS3 collision detection

Hi, I'm trying to make a basic game in Adobe Flash (AS3) to help me learn collision detection. The aim is to make your way through traffic: the player (box_MC) has to make it to the other side, and the other objects (cycle) are in the way and have collision detection. I did the collision detection by going into the cycle movieclip and adding other, smaller cycles which trigger a collision when you hit them.
The problem with the collision is that if the player moves down onto the cycle, the collision is not detected.
P.S. If there is a better way of doing the collision, how is it done?
hitTestObject() and hitTestPoint() aren't very good when it comes to collision detection, which is somewhat ironic since, of course, those are the first things most people look at when trying to implement collisions. However, I've found that simple math (like, really simple) combined with an equally simple while() loop is the best way to go about it.
What I do is:
// the stage (ground) collision box
var mStageRect:Rectangle = new Rectangle(/*stage collision box properties here*/);
// a Point object that holds the location of the bottom center of the player
var mPlayerBase:Point = new Point();
// call this function every frame through your game loop (onEnterFrame)
private function checkCollision(e:Event):void
{
    // recompute the player's bottom-center point each frame
    mPlayerBase.x = player.x + (player.width / 2);
    mPlayerBase.y = player.y + player.height;
    // while the player's bottom center point is inside of the stage...
    while (rectContainsPoint(mStageRect, mPlayerBase))
    {
        // push the player up one pixel (and keep the test point in sync so the loop terminates)
        player.y--;
        mPlayerBase.y--;
        // set gravity to 0
        player.gravity = 0;
        // set isOnGround to true
        player.isOnGround = true;
    }
}
// checks if a point is currently positioned within the bounds of a rectangle object using ultra simple math
private function rectContainsPoint(rect:Rectangle, point:Point):Boolean
{
return point.x > rect.x && point.x < rect.x + rect.width && point.y > rect.y && point.y < rect.y + rect.height;
}
This is waaaaaaay more efficient than hitTestObject/Point, imo, and has given me no problems.
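If what you actually need is box-vs-box collision between the player and a cycle (rather than point-vs-box), the usual approach is an axis-aligned bounding-box overlap test. Here is a minimal sketch of the idea in JavaScript-style syntax (the names and properties are illustrative, not from the code above); the same one-liner translates directly to AS3, and flash.geom.Rectangle also has an intersects() method that does this test for you:
// Two axis-aligned rectangles overlap if they are not separated on either axis.
function rectsOverlap(a, b) {
    // a and b are plain objects with x, y, width and height
    return a.x < b.x + b.width &&
           a.x + a.width > b.x &&
           a.y < b.y + b.height &&
           a.y + a.height > b.y;
}
// e.g. check the player against every cycle each frame:
// if (rectsOverlap(playerBox, cycleBox)) { /* handle the collision */ }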

three.js - How to rotate an object with the mouse (but not using the camera)

I want to create a lot of objects and rotate each of them separately using the mouse. So far, I can select one of the objects with the mouse, but I can't use the mouse to rotate an object.
As the mouse only gives me
mouse.x = event.clientX - windowHalfX;
mouse.y = event.clientY - windowHalfY;
I only know how to use the mousemove and mousedown event handlers to change SELECTED.rotation.y and SELECTED.rotation.x (where SELECTED is the selected object). How can I control SELECTED.rotation.z too?
If the selected object is upside down, the x-rotation will also be backwards, which does not seem very desirable. Is there any way to fix this?
Lots of the examples I found rotate the camera rather than actually rotating the object. I'd like to find a solution that rotates the object without changing the camera.
You should check out the control examples that ship with three.js. They are not always convenient, and mostly you need to copy them and modify them the way you want. But they take an object to control, and this can be the camera or any other object. Here is a small example:
// yaw/pitch come from the mouse position, roll would come from keys
// (this mirrors the pattern used by three.js's FlyControls example)
yawLeft = -((event['pageX'] - fullWidth) - halfWidth) / halfWidth;
pitchDown = ((event['pageY'] - fullHeight) - halfHeight) / halfHeight;
rotationVector.x = (-pitchUp + pitchDown);
rotationVector.y = (-yawRight + yawLeft);
rotationVector.z = (-rollRight + rollLeft);
var tmpQuaternion = new THREE.Quaternion();
tmpQuaternion.set(rotationVector.x * rotMult, rotationVector.y * rotMult, rotationVector.z * rotMult, 1).normalize();
// multiplySelf() is the old three.js name; newer versions call it multiply()
object.quaternion.multiplySelf(tmpQuaternion);
object can be the camera or a model; it does not matter, because the camera is also a THREE.Object3D.
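To make that concrete for the question (rotating the picked object directly, including around z), here is a hedged sketch of wiring mouse drags to the selected object; it is not from the answer above. SELECTED is assumed to be set by your picking code, the 0.005 drag speed is made up, and it uses the current three.js method names setFromAxisAngle() and premultiply(). Premultiplying world-axis rotations also avoids the "backwards when upside down" effect, and a third quaternion around (0, 0, 1), bound to a modifier key for example, gives you the z-rotation:
var isDragging = false;
var previous = { x: 0, y: 0 };
document.addEventListener('mousedown', function (event) {
    isDragging = true;
    previous.x = event.clientX;
    previous.y = event.clientY;
});
document.addEventListener('mouseup', function () {
    isDragging = false;
});
document.addEventListener('mousemove', function (event) {
    if (!isDragging || !SELECTED) return;
    var deltaX = event.clientX - previous.x;
    var deltaY = event.clientY - previous.y;
    previous.x = event.clientX;
    previous.y = event.clientY;
    var speed = 0.005; // radians per pixel of mouse movement
    var qYaw = new THREE.Quaternion().setFromAxisAngle(new THREE.Vector3(0, 1, 0), deltaX * speed);
    var qPitch = new THREE.Quaternion().setFromAxisAngle(new THREE.Vector3(1, 0, 0), deltaY * speed);
    // premultiply rotates about the world axes rather than the object's local axes,
    // so the drag direction stays consistent even when the object is upside down
    SELECTED.quaternion.premultiply(qYaw).premultiply(qPitch);
});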

canvas isPointInPath does not work with ctx.drawImage()

I suppose this doesn't work because canvas is drawing a bitmap of a vector (and a bitmap is not a path).
Even if it did work, the bitmap likely always has a rectangular perimeter.
Is there any way to leverage something like isPointInPath when using drawImage?
example:
The top canvas is drawn using drawImage and isPointInPath does not work.
The bottom canvas is drawn using arc and isPointInPath works.
a link to my proof
** EDIT **
I draw a circle on one canvas, and use isPointInPath to see if the mouse pointer is inside the circle (bottom canvas in my example).
I also "copy" the bottom canvas to the top canvas using drawImage. Notice that isPointInPath will not work on the top canvas (most likely due to reasons I mentioned above). Is there a work-around I can use for this that will work for ANY kind of path (or bitmap)?
A canvas context has this hidden thing called the current path. ctx.beginPath, ctx.lineTo etc create this path.
When you call ctx.stroke() or ctx.fill() the canvas strokes or fills that path.
Even after it is stroked or filled, the path is still present in the context.
This path is the only thing that isPointInPath tests.
If you want to test if something is in an image you have drawn or a rectangle that was drawn with ctx.fillRect(), that is not possible using built in methods.
Typically you'd want to use an is-point-in-rectangle function that you write yourself (or get from someone else).
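For the simple image-rectangle case, a rough do-it-yourself sketch (not from the answer above; canvas, ctx, img and the 50/40 coordinates are placeholders I made up) could look like this:
function pointInRect(px, py, rx, ry, rw, rh) {
    return px >= rx && px <= rx + rw && py >= ry && py <= ry + rh;
}
// remember where you drew the image...
var imgX = 50, imgY = 40;
ctx.drawImage(img, imgX, imgY);
// ...and test mouse coordinates (converted to canvas space) against that rectangle
canvas.addEventListener('click', function (event) {
    var rect = canvas.getBoundingClientRect();
    var x = event.clientX - rect.left;
    var y = event.clientY - rect.top;
    if (pointInRect(x, y, imgX, imgY, img.width, img.height)) {
        console.log('clicked inside the drawn image');
    }
});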
If you're looking for how to do pixel-perfect (instead of just the image rectangle) hit detection for an image there are various methods of doing that discussed here: Pixel perfect 2D mouse picking with Canvas
You could try reimplementing ctx.drawImage() to always draw a box behind the image itself, like so (JSFiddle example):
ctx.customDrawImage = function(image, x, y) {
    this.drawImage(image, x, y);
    // add a rectangle matching the image's bounds to the current path, so isPointInPath can test it
    this.rect(x, y, image.width, image.height);
};

var img1 = new Image();
img1.onload = function() {
    var x = y = 0;
    ctx.drawImage(img1, x, y);
    console.log(ctx.isPointInPath(x + 1, y + 1)); // false: drawImage alone adds nothing to the path
    x = 1.25 * img1.width;
    ctx.customDrawImage(img1, x, y);
    console.log(ctx.isPointInPath(x + 1, y + 1)); // true: the rect() call added the image's bounds
};
img1.src = 'your-image.png'; // placeholder path
Note: you might get side effects like the rectangle appearing over the image, or bleeding through from behind if you are not careful.
For me, isPointInPath failed after the canvas was moved. So I used:
mouseClientX -= gCanvasElement.offsetLeft;
mouseClientY -= gCanvasElement.offsetTop;
I had some more challenges because my canvas element could be rescaled. So first, when I draw the figures (arcs, in my case), I save them in an array together with a name, and then draw them:
if (this.coInit == false)
{
let co = new TempCO ();
co.name= sensor.Name;
co.path = new Path2D();
co.path.arc(c.X, c.Y, this.radius, 0, 2 * Math.PI);
this.coWithPath.push(co);
}
let coWP = this.coWithPath.find(c=>c.name == sensor.Name);
this.ctx.fillStyle = color;
this.ctx.fill(coWP.path);
Then in the mouse event, I loop over the items and check if the click event is in a path. But I also need to rescale the mouse coordinates according to the resized canvas:
getCursorPosition(event) {
const rect = this.ctx.canvas.getBoundingClientRect();
const x = ((event.clientX - rect.left ) / rect.width) * this.canvasWidth;
const y = ((event.clientY - rect.top) / rect.height) * this.canvasHeight;
this.coWithPath.forEach(c=>{
if (this.ctx.isPointInPath(c.path, x, y))
{
console.log("arc is hit", c);
//Switch light
}
});
}
So I get the current size of the canvas and rescale the point to the original size. Now it works!
This is what the TempCO class looks like:
export class TempCO
{
path : Path2D;
name : string;
}

How to draw a 3D sphere?

I want to draw a 3D ball or sphere in an HTML5 canvas. I want to understand the algorithm for drawing a 3D sphere. Can anyone share this with me?
You will need to model a sphere and give it varying colors, so that as it rotates you can see that it is not only a sphere but is actually being rendered.
Otherwise, a sphere in space with no point of reference around it looks like a circle if it is all one solid color.
To start with you will want to try drawing a circle with rectangles, as that is the main primitive you have.
Once you understand how to do that, or how to create a new primitive such as a triangle using the Path method and build a circle out of it, then you are ready to move to 3D.
3D is just a trick, as you will take your model, probably generated by an equation, and then flatten it, as you determine which parts will be seen, and then display it.
But, you will want to change the color of the triangles based on how far they are from a source of light, as well as based on the angle of that part to the light source.
This is where you can start to do optimizations, as, if you do this pixel by pixel then you are raytracing. If you have larger blocks, and a point source of light, and the object is rotating but not moving around then you can recalculate how the color changes for each triangle, then it is just a matter of changing colors to simulate rotating.
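To make the "change the color based on the angle to the light source" idea a bit more concrete, here is a minimal diffuse (Lambert) shading sketch; the vector representation and the helper names are my own assumptions rather than a quote from any particular article:
// Diffuse ("Lambert") shading: brightness is the cosine of the angle between
// the surface normal and the direction toward the light, clamped at zero.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function normalize(v) {
    var len = Math.sqrt(dot(v, v));
    return { x: v.x / len, y: v.y / len, z: v.z / len };
}
function diffuseBrightness(normal, lightDir) {
    return Math.max(0, dot(normalize(normal), normalize(lightDir)));
}
// e.g. scale a grey level by the brightness before filling a triangle
var b = diffuseBrightness({ x: 0, y: 0, z: 1 }, { x: 0.5, y: 0.5, z: 1 });
var shade = Math.round(255 * b);
var fillStyle = 'rgb(' + shade + ',' + shade + ',' + shade + ')'; // assign to ctx.fillStyle before filling the triangle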
The algorithm will depend on what simplifications you want to make, so as you gain experience come back and ask, showing what you have done so far.
Here is an example of doing it, and below I copied the 3D sphere part, but please look at the entire article.
function Sphere3D(radius) {
this.point = new Array();
this.color = "rgb(100,0,255)";
this.radius = (typeof(radius) == "undefined") ? 20.0 : radius;
this.radius = (typeof(radius) != "number") ? 20.0 : radius;
this.numberOfVertexes = 0;
// Loop from 0 to 360 degrees (0 to 6.28 radians) with a pitch of 10 degrees (0.17 radians) ...
for(var alpha = 0; alpha <= 6.28; alpha += 0.17) {
var p = this.point[this.numberOfVertexes] = new Point3D(); // Point3D comes from the linked article
p.x = Math.cos(alpha) * this.radius;
p.y = 0;
p.z = Math.sin(alpha) * this.radius;
this.numberOfVertexes++;
}
// Loop from 0 to 90 degrees with a pitch of 10 degrees ...
// (direction = 1)
// Loop from 0 to 90 degrees with a pitch of 10 degrees ...
// (direction = -1)
for(var direction = 1; direction >= -1; direction -= 2) {
for(var beta = 0.17; beta < 1.445; beta += 0.17) {
var radius = Math.cos(beta) * this.radius;
var fixedY = Math.sin(beta) * this.radius * direction;
for(var alpha = 0; alpha < 6.28; alpha += 0.17) {
p = this.point[this.numberOfVertexes] = new Point3D();
p.x = Math.cos(alpha) * radius;
p.y = fixedY;
p.z = Math.sin(alpha) * radius;
this.numberOfVertexes++;
}
}
}
}
You can try the three.js library, which abstracts a lot of code away from core WebGL programming. Include the three.js library in your HTML from the three.js lib.
You can use the canvas renderer for Safari; WebGL works in Chrome.
Please find the JSFiddle for the sphere:
var camera, scene, material, mesh, geometry, renderer;

function drawSphere() {
    init();
    animate();
}

function init() {
    // scene and camera
    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 1000);
    camera.position.z = 300;
    scene.add(camera);

    // sphere object
    var radius = 50,
        segments = 10,
        rings = 10;
    geometry = new THREE.SphereGeometry(radius, segments, rings);
    // MeshNormalMaterial colors faces by their normals; the color option has no effect here
    material = new THREE.MeshNormalMaterial({
        color: 0x002288
    });
    mesh = new THREE.Mesh(geometry, material);
    scene.add(mesh);

    // renderer
    renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
}

function animate() {
    requestAnimationFrame(animate);
    render();
}

function render() {
    mesh.rotation.x += .01;
    mesh.rotation.y += .02;
    renderer.render(scene, camera);
}

// call the function
drawSphere();
Update: This code is quite old and limited. There are libraries for doing 3D spheres now: http://techslides.com/d3-globe-with-canvas-webgl-and-three-js/
Over ten years ago I wrote a Java applet to render a textured sphere by actually doing the math to work out where the surface of the sphere was in the scene (not using triangles).
I've rewritten it in JavaScript for canvas and I've got a demo rendering the earth as a sphere (source: haslers.info).
I get around 22 fps on my machine, which is about as fast as the Java version it was based on, if not a little faster!
Now, it's a long time since I wrote the Java code (and it was quite obtuse), so I don't really remember exactly how it works; I've just ported it to JavaScript. However, this is from a slow version of the code, and I'm not sure whether the faster version was due to optimisations in the Java methods I used to manipulate pixels or to speedups in the math it does to work out which pixel to render from the texture. I was also corresponding at the time with someone who had a similar applet that was much faster than mine, but again I don't know if any of their speed improvements would be possible in JavaScript, as they may have relied on Java libraries. (I never saw their code, so I don't know how they did it.)
So it may be possible to improve on the speed. But this works well as a proof of concept.
I'll have a go at converting my faster version some time to see if I can get any speed improvements into the JavaScript version.
Well, an image of a sphere will always have a circular shape on your screen, so the only thing that matters is the shading. This will be determined by where you place your light source.
As for algorithms, ray tracing is the simplest, but also the slowest by far — so you probably wouldn't want to use it to do anything very complicated in a <CANVAS> (especially given the lack of graphics acceleration available in that environment), but it might be fast enough if you just wanted to do a single sphere.
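As a proof of concept of how little is needed for a single sphere, here is a hedged sketch that shades one sphere per pixel on a canvas; it assumes an orthographic view straight down the z-axis and a made-up canvas element with id "c", so treat it as an illustration of the idea rather than a full ray tracer:
// Assumes a <canvas id="c" width="200" height="200"> element (the id and size are made up).
var canvas = document.getElementById('c');
var ctx = canvas.getContext('2d');
var size = canvas.width;
var img = ctx.createImageData(size, size); // starts fully transparent

var cx = size / 2, cy = size / 2, r = size * 0.4;
// unit vector pointing from the surface toward the light (up and to the right, out of the screen)
var light = { x: 0.5, y: -0.5, z: 0.7 };
var len = Math.sqrt(light.x * light.x + light.y * light.y + light.z * light.z);
light.x /= len; light.y /= len; light.z /= len;

for (var y = 0; y < size; y++) {
    for (var x = 0; x < size; x++) {
        var dx = (x - cx) / r, dy = (y - cy) / r;
        var d2 = dx * dx + dy * dy;
        if (d2 > 1) continue; // this pixel misses the sphere
        // surface normal at this pixel, from the unit-sphere equation x^2 + y^2 + z^2 = 1
        var dz = Math.sqrt(1 - d2);
        // Lambert term: cosine of the angle between the normal and the light direction
        var brightness = Math.max(0, dx * light.x + dy * light.y + dz * light.z);
        var c = Math.round(40 + 215 * brightness); // keep a little ambient so the dark side stays visible
        var i = (y * size + x) * 4;
        img.data[i] = c;
        img.data[i + 1] = c;
        img.data[i + 2] = c;
        img.data[i + 3] = 255;
    }
}
ctx.putImageData(img, 0, 0);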