change axis of rotation - autodesk-forge

I am using the sample code from the Transform tutorial for rotation and position changes, and I am facing a couple of problems.
I want to rotate a door and a window. Currently the axis of rotation passes through the center. How can I change it so that an object rotates around an axis passing through its side?
For the position change: when I move, let's say, a window in any direction, it moves but stays visible on the wall. I want it hidden when the window collides with the wall.

1/ Here is a code snippet that illustrates how to rotate elements (fragments). For a more complete sample, take a look at this article: Rotate Components Control for the Viewer
rotateFragments (model, fragIdsArray, axis, angle, center) {

  // build the rotation to apply
  var quaternion = new THREE.Quaternion()
  quaternion.setFromAxisAngle(axis, angle)

  fragIdsArray.forEach((fragId, idx) => {

    var fragProxy = this.viewer.impl.getFragmentProxy(
      model, fragId)

    fragProxy.getAnimTransform()

    // rotate the fragment position around the chosen center
    var position = new THREE.Vector3(
      fragProxy.position.x - center.x,
      fragProxy.position.y - center.y,
      fragProxy.position.z - center.z)

    position.applyQuaternion(quaternion)
    position.add(center)

    fragProxy.position = position

    // compose the new rotation with the fragment's current one
    fragProxy.quaternion.multiplyQuaternions(
      quaternion, fragProxy.quaternion)

    if (idx === 0) {

      var euler = new THREE.Euler()
      euler.setFromQuaternion(
        fragProxy.quaternion, 0)

      this.emit('rotate', {
        dbIds: this.selection.dbIdArray,
        fragIds: fragIdsArray,
        rotation: euler,
        model
      })
    }

    fragProxy.updateAnimTransform()
  })
}
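Regarding the axis question: the axis argument only gives the direction of rotation; the pivot is whatever you pass as center, so to rotate around an edge you can pass a point on the component's side instead of its centroid. Here is a minimal usage sketch, assuming bbox is a THREE.Vector3-based THREE.Box3 holding the door's world bounds (gathered however you prefer, e.g. from the fragment list) and a Z-up model; the names are illustrative only, not part of the tutorial:
// Hedged usage sketch - `bbox`, `model` and `fragIdsArray` are assumed to exist
var axis = new THREE.Vector3(0, 0, 1)        // vertical axis in a Z-up model
var angle = 90 * Math.PI / 180               // 90 degrees in radians

// put the pivot on the side of the bounding box instead of its center
var hinge = new THREE.Vector3(
  bbox.min.x,
  (bbox.min.y + bbox.max.y) / 2,
  (bbox.min.z + bbox.max.z) / 2)

this.rotateFragments(model, fragIdsArray, axis, angle, hinge)
this.viewer.impl.sceneUpdated(true)          // refresh the viewport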
2/ When you transform the geometry, you are just moving triangles around; there is no built-in logic that will hide components because they overlap, so you will need to implement that yourself. You should be able to find Three.js code that computes whether two meshes intersect (a triangle-triangle intersection algorithm) and run it against the component you are moving and all the walls around it. Here is something that can put you on track: How to detect collision in three.js?
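As a rough starting point (this is not part of the Forge API), a cheap first pass is to compare axis-aligned bounding boxes before reaching for an exact triangle-triangle test. The sketch below only assumes you can obtain a THREE.Box3 in world space for the moved component and for each wall; the variable names are assumptions:
// Minimal sketch: AABB overlap test between the moved component and the walls
function collidesWithAnyWall (movingBox, wallBoxes) {
  return wallBoxes.some(function (wallBox) {
    return movingBox.min.x <= wallBox.max.x && movingBox.max.x >= wallBox.min.x &&
           movingBox.min.y <= wallBox.max.y && movingBox.max.y >= wallBox.min.y &&
           movingBox.min.z <= wallBox.max.z && movingBox.max.z >= wallBox.min.z
  })
}

// e.g. hide the window once it overlaps a wall (windowBox, wallBoxes and
// windowDbId are assumptions):
// if (collidesWithAnyWall(windowBox, wallBoxes)) viewer.hide(windowDbId)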
Hope that helps

Related

setCenter() Method is not properly centering sprite texture on box2d fixture

The past few days I've been trying to figure out a display bug I don't understand. I've been working on a simple 2D platformer with Box2D and orthogonal Tiled maps. So far so good: the physics work, and using the Box2D debug renderer I can verify proper player fixture and camera movement through the level.
As the next step I've tried to load textures to display sprites instead of debug shapes. This is where I stumble. I can load animations for my player body/fixture, but when I use the setCenter() method to center the texture on the fixture, it is always off center.
I've tried approaches like halving texture widths and heights, hoping to center the texture on the player fixture, but I get the exact same off-position rendering. I've played around with world/camera/screen unit coordinates, but the misalignment persists.
I'm creating the player in my Player class with the following code.
First I define the player in box2d:
//define player's physical behaviour
public void definePlayer() {
    //definitions to later use in a body
    BodyDef bdef = new BodyDef();
    bdef.position.set(120 / Constants.PPM, 60 / Constants.PPM);
    bdef.type = BodyDef.BodyType.DynamicBody;
    b2body = world.createBody(bdef);

    //Define needed components of the player's main fixture
    FixtureDef fdef = new FixtureDef();
    PolygonShape shape = new PolygonShape();
    shape.setAsBox(8 / Constants.PPM, 16 / Constants.PPM); //size of the player hitbox

    //set the player's category bit
    fdef.filter.categoryBits = Constants.PLAYER_BIT;
    //set which category bits the player should collide with. If not mentioned here, no collision occurs
    fdef.filter.maskBits = Constants.GROUND_BIT |
            Constants.GEM_BIT |
            Constants.BRICK_BIT |
            Constants.OBJECT_BIT |
            Constants.ENEMY_BIT |
            Constants.TREASURE_CHEST_BIT |
            Constants.ENEMY_HEAD_BIT |
            Constants.ITEM_BIT;

    fdef.shape = shape;
    b2body.createFixture(fdef).setUserData(this);
}
Then I call the texture Region to be drawn in the Player class constructor:
//define in box2d
definePlayer();
//set initial values for the player's location, width and height, initial animation.
setBounds(0, 0, 64 / Constants.PPM, 64 / Constants.PPM);
setRegion(playerStand.getKeyFrame(stateTimer, true));
And finally, I update() my player:
public void update(float delta) {
    //center position of the sprite on its body
    // setPosition(b2body.getPosition().x - getWidth() / 2, b2body.getPosition().y - getHeight() / 2);
    setCenter(b2body.getPosition().x, b2body.getPosition().y);
    setRegion(getFrame(delta));

    //set all the boolean flags during update cycles appropriately. DO NOT manipulate b2bodies
    //while the simulation happens! Therefore, only set flags there, and call the appropriate
    //methods outside the simulation step during update
    checkForPitfall();
    checkIfAttacking();
}
And my result is
this, facing right
and this, facing left
Update:
I've been trying to just run
setCenter(b2body.getPosition().x, b2body.getPosition().y);
as suggested, and I got the following result:
facing right and facing left.
The sprite texture flip code is as follows:
if((b2body.getLinearVelocity().x < 0 || !runningRight) && !region.isFlipX()) {
    region.flip(true, false);
    runningRight = false;
} else if ((b2body.getLinearVelocity().x > 0 || runningRight) && region.isFlipX()) {
    region.flip(true, false);
    runningRight = true;
}
I'm testing whether the boolean flag for facing right is set or the x-axis velocity of my player b2body is positive/negative, and whether my texture region is already flipped, and then I use libGDX's flip() accordingly. I should not be messing with fixture coordinates anywhere here, hence my confusion.
The coordinates of a Box2D fixture are offsets from the body position; the position isn't necessarily the center (although it could be, depending on your shape definition offsets). So in your case I think the position is actually the lower-left point of the Box2D polygon shape.
In that case you don't need to adjust for width and height, because sprites are also drawn from their bottom-left position. So all you need is:
setPosition(b2body.getPosition().x, b2body.getPosition().y);
I'm guessing you flip the Box2D body when the player looks left; the position of the shape is then the bottom right, so the sprite offset of width/2 and height/2 is measured from the bottom right instead. So specifically when you are looking left you need an offset of
setPosition(b2body.getPosition().x - getWidth(), b2body.getPosition().y);
I think looking right will be fixed by this, but I don't know for sure how you handle looking left in terms of what you do to the body; something is done, because the offset changes entirely as shown in your capture. If you aren't doing some flipping, you could add how you handle looking right to the question.
EDIT
It seems the answer was that the sprite wasn't centered in the sprite sheet, and this additional space around the sprite gave the visual impression of it being in the wrong place (see comments).

How to rotate all objects of canvas at once using Fabric.js?

I am working on a custom product designer that uses Fabric.js. I want to rotate all objects on the canvas at once by pressing one button (rotate left, rotate right).
I have achieved this using this code:
stage.forEachObject(function(obj){
    obj.setAngle(rotation).setCoords();
});
stage.renderAll();
But it has one bug: every element rotates around its own center point. I want every element to rotate with respect to the whole canvas.
Grouping and rotating the group did not work so well for me. Here is another solution based on this js fiddle.
rotateAllObjects (degrees) {
  let canvasCenter = new fabric.Point(canvas.getWidth() / 2, canvas.getHeight() / 2) // center of canvas
  let radians = fabric.util.degreesToRadians(degrees)
  canvas.getObjects().forEach((obj) => {
    let objectOrigin = new fabric.Point(obj.left, obj.top)
    let new_loc = fabric.util.rotatePoint(objectOrigin, canvasCenter, radians)
    obj.top = new_loc.y
    obj.left = new_loc.x
    obj.angle += degrees // rotate each object by the same angle
    obj.setCoords()
  })
  canvas.renderAll()
},
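For the two buttons mentioned in the question, a hypothetical wiring could look like the following; the element ids and the 90° step are assumptions, and rotateAllObjects is assumed to be reachable as a plain function:
document.getElementById('rotate-left').addEventListener('click', function () {
  rotateAllObjects(-90)   // rotate the whole canvas content 90° counter-clockwise
})
document.getElementById('rotate-right').addEventListener('click', function () {
  rotateAllObjects(90)    // rotate the whole canvas content 90° clockwise
})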
You could add all the objects to a group and then rotate the group. This way you can also set the center of rotation.
This is how it could be solved
function rotate(a) {
    var group = new fabric.Group(canvas.getObjects());
    //angle is a var with scope outside this function,
    //so you can use this function as rotate(90) and keep rotating
    angle = (angle + a) % 360;
    group.rotate(angle);
    canvas.centerObject(group);
    group.setCoords();
    canvas.renderAll();
}
FabricJS: rotate everything and maintain the relative positions as well.
You can download the files here - https://drive.google.com/file/d/1UV1nBdfBk6bg9SztyVoWyLJ4eEZJgZRf/view?usp=sharing

HTML5 Canvas (SIMPLE PAINT APPLICATION)

I'm working on a paint application and I'm trying to centre the canvas in the middle of the screen. In every attempt I made, the hit detection was off (still at the top left of the screen) even though the canvas visually appeared in the centre of the screen.
Basically, it won't draw onto the canvas once I move it to the centre of the screen.
Any help would be much appreciated. Thanks.
I have my code below...
It's not clear from your question how you're centring it, but you need to account for any offset of the elements that contain your canvas when you map the mouse position to a position on the canvas. You can do this by including the offsetLeft and offsetTop values (see docs) of the containing element in your calculations.
The following will work if you're offsetting the position of the div which wraps the canvas (which I've given an id to make it easier to reference):
function move(e) {
    // Get the div containing the canvas. Would be more efficient to set this once on document load
    var container = document.getElementById('container');

    if((e.button == 0) && (mouseIsDown)) {
        g.beginPath();
        document.onselectstart = function(){ return false; }
        //g.fillStyle = "red";
        // Account for the offset of the parent when drawing to the canvas
        g.arc((e.x - container.offsetLeft) - brush, (e.y - container.offsetTop) - brush, brush, 0, Math.PI * 2);
        g.fill();
        g.closePath();
    }
}
And a simplistic demo using your fiddle here.
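A common alternative (not from the original answer) is to map mouse coordinates with getBoundingClientRect(), which accounts for any ancestor offsets and CSS scaling in one step. A minimal sketch, with assumed names:
// Convert a mouse event to canvas coordinates regardless of how the canvas
// (or its container) has been positioned or scaled with CSS.
function toCanvasCoords(canvas, e) {
    var rect = canvas.getBoundingClientRect();
    return {
        x: (e.clientX - rect.left) * (canvas.width / rect.width),
        y: (e.clientY - rect.top) * (canvas.height / rect.height)
    };
}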

Three.js How to point child of Object3d to face camera?

Here is the thing. I have an Object3d that is composed of 6 planes arranged to form a cube. Now, after applying a quaternion rotation based on mouse input, and after the cube has stopped, I need the cube to turn its closest side (plane child) straight towards the camera. What I am doing now is getting the current Euler angles from my Object3d matrix, applying the rotation to that matrix and setting it back on my object's quaternion with the setFromRotationMatrix() function. Sometimes this method works (usually at low angles) and sometimes the Z axis behaves wrongly (or maybe Y, or even all of them, I can't tell).
Now, I certainly could just calculate the closest side and apply that side's quaternion directly to my object, which works, but that gives me no animation.
I’m using this code to get my current angles: http://www.cs.princeton.edu/~gewang/projects/darth/stuff/quat_faq.html#Q37. Based on that I calculate the closest 90 degrees rotation for every axis:
function lookAtCamDeg(val){
    var d = 1;
    var s = 1;
    var newAngle;
    if(val < 0) d *= -1;
    val = Math.abs(val);
    if(val <= 45) s *= -1;
    if(val > 45) val = 90 - val;
    newAngle = val * d * s;
    return newAngle;
}
And applying that to turn my cube:
var angles = getAngles();
var newX = lookAtCamDeg(angles.x);
var newY = lookAtCamDeg(angles.y);
var newZ = lookAtCamDeg(angles.z);

var ma = cube.matrix;
ma = ma.rotateX(-newX * DEGREES);
ma = ma.rotateY(-newY * DEGREES);
ma = ma.rotateZ(-newZ * DEGREES);
cube.quaternion.setFromRotationMatrix(ma);
What I am thinking now is to try using the separate planes of my cube (children) and, based on their normals, apply the lookAt() method, but I don't know how to do it, since I need to rotate the whole object, not just one child. Could someone please point me in the right direction? What is the best way to achieve this?
The THREE.Quaternion object contains methods for interpolation. So calculate the quaternion of the original face normal and the quaternion you get here from setFromRotationMatrix(ma), then apply THREE.Quaternion.slerp() repeatedly to get the in-betweens, which you can apply to cube.quaternion.
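A minimal sketch of that idea, using the instance slerp() for brevity and assuming targetQuaternion is the value computed via setFromRotationMatrix(ma); the names and the 0.05 step are assumptions, and the loop is driven by requestAnimationFrame:
var startQuaternion = cube.quaternion.clone();
var t = 0;

function animateTurn() {
    t = Math.min(t + 0.05, 1);  // advance the interpolation parameter
    // interpolate between the start and target orientation
    cube.quaternion.copy(startQuaternion).slerp(targetQuaternion, t);
    if (t < 1) requestAnimationFrame(animateTurn);
}
animateTurn();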

canvas isPointInPath does not work with ctx.drawImage()

I suppose this doesn't work because the canvas is drawing a bitmap of a vector (and a bitmap is not a path).
Even if it did work, the bitmap likely always has a rectangular perimeter.
Is there any way to leverage something like isPointInPath when using drawImage?
example:
The top canvas is drawn using drawImage and isPointInPath does not work.
The bottom canvas is drawn using arc and isPointInPath works.
a link to my proof
** EDIT **
I draw a circle on one canvas and use isPointInPath to see if the mouse pointer is inside the circle (the bottom canvas in my example).
I also "copy" the bottom canvas to the top canvas using drawImage. Notice that isPointInPath does not work on the top canvas (most likely for the reasons I mentioned above). Is there a work-around I can use that will work for ANY kind of path (or bitmap)?
A canvas context has a hidden thing called the current path. ctx.beginPath, ctx.lineTo, etc. create this path.
When you call ctx.stroke() or ctx.fill() the canvas strokes or fills that path.
Even after it is stroked or filled, the path is still present in the context.
This path is the only thing that isPointInPath tests.
If you want to test if something is in an image you have drawn or a rectangle that was drawn with ctx.fillRect(), that is not possible using built in methods.
Typically you'd want to use an is-point-in-rectangle function that you write yourself (or get from someone else).
If you're looking for how to do pixel-perfect (instead of just the image rectangle) hit detection for an image there are various methods of doing that discussed here: Pixel perfect 2D mouse picking with Canvas
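For the plain image-rectangle case, such a test is only a few lines. A sketch, with assumed parameter names:
// Returns true if (px, py) lies inside the rectangle where the image was drawn.
function pointInImageRect(px, py, x, y, width, height) {
    return px >= x && px <= x + width &&
           py >= y && py <= y + height;
}

// e.g. pointInImageRect(mouseX, mouseY, imgX, imgY, img.width, img.height)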
You could try reimplementing ctx.drawImage() to always draw a box behind the image itself, like so (JSFiddle example):
ctx.customDrawImage = function(image, x, y){
    // draw the image and add a matching rectangle to the current path,
    // so isPointInPath has something to test against
    this.drawImage(image, x, y);
    this.rect(x, y, image.width, image.height);
}

var img1 = new Image();
img1.onload = function(){
    var x = y = 0;

    ctx.drawImage(img1, x, y);
    console.log(ctx.isPointInPath(x + 1, y + 1)); // false - plain drawImage adds nothing to the path

    x = 1.25 * img1.width;
    ctx.customDrawImage(img1, x, y);
    console.log(ctx.isPointInPath(x + 1, y + 1)); // true - the rect is part of the path
};
// assign img1.src to trigger onload
Note: you might get side effects, like the rectangle appearing over the image or bleeding through from behind, if you are not careful.
To me, isPointInPath failed after the canvas was moved. So I used:
mouseClientX -= gCanvasElement.offsetLeft;
mouseClientY -= gCanvasElement.offsetTop;
I had some more challenges because my canvas element could be rescaled. So first, when I draw the figures (in my case arcs), I save them in an array together with a name, and then draw them:
if (this.coInit == false)
{
    let co = new TempCO();
    co.name = sensor.Name;
    co.path = new Path2D();
    co.path.arc(c.X, c.Y, this.radius, 0, 2 * Math.PI);
    this.coWithPath.push(co);
}

let coWP = this.coWithPath.find(c => c.name == sensor.Name);

this.ctx.fillStyle = color;
this.ctx.fill(coWP.path);
Then in the mouse event handler I loop over the items and check whether the click is inside a path. But I also need to rescale the mouse coordinates according to the resized canvas:
getCursorPosition(event) {
    const rect = this.ctx.canvas.getBoundingClientRect();
    const x = ((event.clientX - rect.left) / rect.width) * this.canvasWidth;
    const y = ((event.clientY - rect.top) / rect.height) * this.canvasHeight;

    this.coWithPath.forEach(c => {
        if (this.ctx.isPointInPath(c.path, x, y))
        {
            console.log("arc is hit", c);
            //Switch light
        }
    });
}
So I get the current size of the canvas and rescale the point to the original size. Now it works!
This is what TempCO looks like:
export class TempCO
{
    path: Path2D;
    name: string;
}