ClientToWorld value when not interacting with model - autodesk-forge

Is there a method similar to clientToWorld that can give me the X,Y world coordinates if I provide it with X,Y screen coordinates?
I know that clientToWorld gives me a Z coordinate where the ray intersects the model, but I am happy to have no Z coordinate, since my ray will not hit a point on the model.

How about Viewer3dImpl.clientToViewport?
let coords = viewer.impl.clientToViewport(client.x, client.y); // THREE.Vector3 {x: -0.9696521095484826, y: 0.9200779727095516, z: 1} -- z is always 1
let finalCoords = coords.unproject(viewer.impl.camera); // THREE.Vector3 {x: -26.379134321221724, y: 5.162777223710702, z: 1.3846547842336627}
See the unofficial documentation for this method here (not authoritative and subject to change without notice).
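If you need the world X,Y at a specific elevation (for example the z = 0 plane), another option is to cast a ray through the viewport point and intersect it with a plane. A minimal sketch, assuming a ground plane at z = 0; viewportToRay is another unofficial Viewer3dImpl helper, so the same caveats apply:
let vpVec = viewer.impl.clientToViewport(client.x, client.y);
let ray = new THREE.Ray();
viewer.impl.viewportToRay(vpVec, ray); // build a ray from the camera through the viewport point
let groundPlane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0); // assumed ground plane at z = 0
let hit = new THREE.Vector3();
ray.intersectPlane(groundPlane, hit); // hit.x and hit.y are the world coords (null if the ray misses the plane)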

Related

How to move the camera in a Forge Viewer to face the true north direction, using Euler angles

I understand that the Forge Viewer uses three.js extensively, and I have a couple of questions.
I want to point my Forge Viewer camera to the north direction (true north) and further synchronise the rotation based on the north values.
Also, is it possible to set the bounds?
I'm trying to synchronise the Forge Viewer based on a set of Euler angles (pitch, yaw and roll) available at my hand.
I'm using Forge Viewer version 7.
Fareed also asked this via email, so I'm copying & pasting my replies here.
I'm not sure which source model format you used, so let's suppose it's Revit (RVT).
In the Revit model metadata, two attributes can help calculate the rotation to true north:
metadata['world north vector']['XYZ']: the project north vector of the Revit view.
metadata['custom values']['angleToTrueNorth']: the angle from project north to true north of the Revit view.
// Calculate project north angle
const projectNorthVector = new THREE.Vector3().fromArray( model.getData().metadata['world north vector']['XYZ'] );
const autoCam = viewer.autocam;
const frontDirection = autoCam.sceneFrontDirection.clone(); // viewer world north
const upVector = autoCam.sceneUpDirection.clone();
let crossVector = new THREE.Vector3();
crossVector.crossVectors( frontDirection, projectNorthVector );
const projectNorthAngle = projectNorthVector.angleTo( frontDirection ) * ( crossVector.dot( upVector ) < 0 ? -1 : 1 );
// Calculate true north angle
const trueNorthAngle = model.getData().metadata['custom values']['angleToTrueNorth'] * ( Math.PI / 180 );
// Final rotation angle from viewer world north to true north
const finalRotationAngle = projectNorthAngle + trueNorthAngle;
// ... and then rotate your vector around Z.
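For example, rotating the viewer's front direction around the up axis by that angle (a minimal sketch; it assumes the scene up axis is Z, which you should verify against autoCam.sceneUpDirection):
const trueNorthVector = frontDirection.clone().applyAxisAngle( new THREE.Vector3( 0, 0, 1 ), finalRotationAngle );
// trueNorthVector now points from viewer world north toward true north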
Sorry, I'm not familiar with the Euler angles (pitch, yaw, and roll), but from three.js documentation, I can see it uses intrinsic Tait-Bryan angles.
Three.js uses intrinsic Tait-Bryan angles. This means that rotations are performed with respect to the local coordinate system. That is, for order 'XYZ', the rotation is first around the local-X axis (which is the same as the world-X axis), then around local-Y (which may now be different from the world Y-axis), then local-Z (which may be different from the world Z-axis).
So you can probably get that in either of the ways below:
Use the three.js API to get Euler Tait-Bryan angles from the quaternion:
const quaternion = viewer.getCamera().quaternion.clone();
const rotation = new THREE.Euler().setFromQuaternion( quaternion, 'XYZ' );
Or get it from the camera's rotation:
const { rotation } = viewer.getCamera();
const eulerOrder = rotation.order;
Or refer to the Navisworks approach: https://adndevblog.typepad.com/aec/2019/07/get-roll-value-of-edit-current-viewpoint.html
viewer.navigation.setCameraUpVector( new THREE.Vector3(0,1,0), true );
const quaternion = viewer.getCamera().quaternion.clone();
let { x, y, z, w } = quaternion;
let roll = Math.atan2(2*y*w - 2*x*z, 1 - 2*y*y - 2*z*z);
let pitch = Math.atan2(2*x*w - 2*y*z, 1 - 2*x*x - 2*z*z);
let yaw = Math.asin(2*x*y + 2*z*w);
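For instance, to inspect those values in degrees (plain JavaScript, nothing Forge-specific assumed):
const toDegrees = ( radians ) => radians * 180 / Math.PI;
console.log( toDegrees( roll ), toDegrees( pitch ), toDegrees( yaw ) );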
To set Euler angles on the camera, here is an approach, but I think you will need to change the Euler order if it's not XYZ.
const euler = new THREE.Euler(..., ..., ..., 'XYZ');
viewer.getCamera().quaternion.setFromEuler(euler);
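A usage sketch with hypothetical inputs (pitchRad, yawRad and rollRad are placeholders for your own radian values; syncCamera is an unofficial Viewer3dImpl helper that pushes the camera change to the renderer):
const euler = new THREE.Euler( pitchRad, yawRad, rollRad, 'XYZ' );
viewer.getCamera().quaternion.setFromEuler( euler );
viewer.impl.syncCamera( true );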

Segmenting on Arcs from DWG File

I have an application using the Forge Viewer to display converted AutoCAD DWG files. The short description is that I need to take specific polylines out of the DWG source and use the Edit2D extension to draw them as polygons over the background. I have this working, but arcs are causing some issues right now. This doesn't have to be perfect, but it should be reasonably close to the same shape. In most cases it just draws a line from the start to the end of the arc (and I understand why, see code below), but in other cases it significantly segments the arc and I'm not sure why.
I start by finding the IDs of the polylines based on their layer and then getting the fragment IDs (this is working fine). Then I get the vertices for the polyline like this:
export function getVertexesById(
  viewer: Autodesk.Viewing.GuiViewer3D,
  frags: Autodesk.Viewing.Private.FragmentList,
  fragIds: number[],
  dbId: number
): Point[] {
  // We need to also get the center points of arcs, as lines seem to be drawn to them in
  // the callbacks for some reason. Center points are removed from the point array at the
  // end, so we don't get strange spikes on our shapes.
  const polyPoints: Point[] = [];
  const centers: Point[] = [];
  fragIds.forEach((fid) => {
    const mesh = frags.getVizmesh(fid);
    const vbr = new Autodesk.Viewing.Private.VertexBufferReader(
      mesh.geometry,
      viewer.impl.use2dInstancing
    );
    vbr.enumGeomsForObject(dbId, {
      onLineSegment(x1: number, y1: number, x2: number, y2: number, _vpId: number) {
        checkAddPoint(polyPoints, { x: x1, y: y1, z: 0 });
        checkAddPoint(polyPoints, { x: x2, y: y2, z: 0 });
      },
      onCircularArc: function (cx, cy, start, end, radius, _vpId) {
        centers.push({ x: cx, y: cy, z: 0 });
      },
      onEllipticalArc: function (cx, cy, start, end, major, minor, tilt, _vpId) {
        centers.push({ x: cx, y: cy, z: 0 });
      },
      onOneTriangle: function (x1, y1, x2, y2, x3, y3, _vpId) {
        checkAddPoint(polyPoints, { x: x1, y: y1, z: 0 });
        checkAddPoint(polyPoints, { x: x2, y: y2, z: 0 });
        checkAddPoint(polyPoints, { x: x3, y: y3, z: 0 });
      },
      onTexQuad: function (cx, cy, width, height, rotation, _vpId) {
        centers.push({ x: cx, y: cy, z: 0 });
      },
    });
  });
  centers.forEach((c) => {
    checkRemovePoint(polyPoints, { x: c.x, y: c.y, z: 0 });
  });
  return polyPoints;
}
The functions checkAddPoint and checkRemovePoint are just helper functions that make sure we don't duplicate points, taking rounding into account (so we don't get two points that are, say, (0, 0, 0) and (0, 0.00001, 0)).
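For context, here is a minimal sketch of what such a helper could look like (hypothetical; the tolerance value is an assumption, not taken from the real code):
const EPSILON = 1e-3; // assumed rounding tolerance
function samePoint(a: Point, b: Point): boolean {
  return Math.abs(a.x - b.x) < EPSILON && Math.abs(a.y - b.y) < EPSILON;
}
function checkAddPoint(points: Point[], p: Point): void {
  // only add the point if no existing point is within the tolerance
  if (!points.some((q) => samePoint(q, p))) {
    points.push(p);
  }
}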
I then use those points to draw with the Edit2D extension. So what I would expect here is that it creates a series of points that would draw along all the straight lines of the polyline and when it gets to an arc it just draws from one endpoint to the other. That is mostly what I see.
Here is an example file as it looks in ACAD:
Notice there are a handful of breaks in the arc around the outside of the room. What I get when I do the above process is this:
Notice all along the top I get what I would expect. However, along the bottom in 2 places I get a huge number of segments all along the line.
I went back to the AutoCAD file, exploded the polyline, and examined it as much as I know how, and I can't find anything different about those two segments vs. the others that would indicate why they act differently.
What would be really awesome is if there were an easy way to just segment along an arc, say every x units, and have it return that (there is a sketch of that idea after the edit below), but I'm not expecting that here; I just want to know why it is treating these differently.
Any help is greatly appreciated.
Edit
I should also mention that I have logged the creation routine, and the only callbacks ever hit are onLineSegment and onCircularArc. As you can see, the circular arc one only records the center point so it can be removed from the list later, so all of these extra points are for some reason coming through the line segment callback.
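As an aside, if approximating the arcs yourself is acceptable, the onCircularArc callback already receives everything needed to sample an arc at a fixed resolution. A hedged sketch (the segment count is an arbitrary assumption, and this does not explain the unexpected segmentation):
onCircularArc: function (cx, cy, start, end, radius, _vpId) {
  const SEGMENTS = 16; // assumed resolution; start and end are angles in radians
  const step = (end - start) / SEGMENTS;
  for (let i = 0; i <= SEGMENTS; i++) {
    const a = start + i * step;
    checkAddPoint(polyPoints, { x: cx + radius * Math.cos(a), y: cy + radius * Math.sin(a), z: 0 });
  }
},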

What is the most comprehensible way to create a Rect object from a center point and a size in PyGame?

I want to create a pygame.Rect object from a center point (xc, yc) and a size (w, h).
pygame.Rect just provides a constructor with the top left point and the size.
Of course I can calculate the top left point:
rect = pygame.Rect(xc - w // 2, yc - h // 2, w, h)
Or I can set the location via the virtual attribute center:
rect = pygame.Rect(0, 0, w, h)
rect.center = xc, yc
If I want to completely confuse someone, I use inflate:
rect = pygame.Rect(xc, yc, 0, 0).inflate(w, h)
Or even clamp:
rect = pygame.Rect(0, 0, w, h).clamp((xc, yc, 0, 0))
None of these methods satisfies me. Either I have to calculate something, write several lines of code, or use a function that completely hides what is happening.
I also don't want to write a function (or lambda) as I think this is completely over the top for creating a simple rectangle.
So my question is:
How do you usually create such a rectangle with a self-explanatory line of code so everyone can see what is happening at a glance?
Is there a much easier method? Am I missing something?
Interesting question, Rabbid76.
I personally try to write code such that a person with only a general understanding of programming concepts can read it. This includes absolute beginners (95% of the people asking PyGame questions) and converts from other languages.
This is why I mostly shy away from using Python's if x < y < z: and blah = [x for x in some-complex-iff-loop] syntax, et al. (And that's also why I always put my if conditions in brackets.) Sure, if you know Python well it doesn't matter, but for an example of why it's important, go try to read a Perl script from the mid-2010s and you'll see stuff like:
print $_, "\n" foreach ( @tgs );
It didn't have to be written like that; they could have used a loop block with some instructive variable names, not $_, etc.
So bearing the above in mind, the question comes down to: which is the easiest to read and understand?
So for my 2 cents' worth, it has to be option #2:
rect = pygame.Rect(0, 0, w, h)
rect.center = xc, yc
It's absolutely clear to a syntax-ignorant code reader that a rectangle is being created and some kind of centre point is being set.
But to make the code more "self documenting", it could be wrapped in a function call:
def getRectAround( centre_point, width, height ):
    """ Return a pygame.Rect of size width by height,
        centred around the given centre_point """
    rectangle = pygame.Rect( 0, 0, width, height )   # make new rectangle
    rectangle.center = centre_point                  # centre rectangle
    return rectangle
# ...
rect = getRectAround( ( x, y ), w, h )
Sometimes more code is better.

Libgdx is showing wrong (x, y) coordinates from Tiled

When I remove a tile at the coords (for example X: 15, Y: 9) with
TiledMapTileLayer tiledMapTileLayer = (TiledMapTileLayer)map.getLayers().get(0);
tiledMapTileLayer.setCell(15, 9, null);
I notice that the wrong tile is actually removed from the map. Instead, the tile at X: 15, Y: 6 is removed. What am I doing wrong?
I believe this is due to libgdx flipping the map vertically to match its own coordinate system (its Y axis points up, while Tiled's points down). If your map is 16 tiles high, trying to remove the tile at Y: 9 will result in removal of the tile at Y: 16 - 9 - 1 = 6.
If you want to copy a Y coordinate from Tiled and use it in your code, you will generally need to apply the following conversion to turn it into the same location in libgdx:
int y = tileLayer.getHeight() - 1 - [Y coordinate from Tiled];

Projection drift when rendering in WebGL over Google Map

I am trying to implement WebGL-based rendering on Google Maps (API v3), as I want to render a massive number of dynamic geometries.
Basically, I create a google.maps.OverlayView with a WebGL canvas attached to the map.
However, I encountered a problem with the mapping of the projection. I extracted the fromLatLngToPoint function from the Google Maps API as follows:
// Minified internals: m is Math, oe clamps a value, re converts degrees to radians.
function fromLatLngToPoint(a) {
  var c = { x: 0, y: 0 },
      d = this.j; // projection origin
  c.x = d.x + a.lng * this.B; // B: world pixels per degree of longitude
  var e = oe(m.sin(re(a.lat)), -(1 - 1E-15), 1 - 1E-15);
  c.y = d.y + .5 * m.log((1 + e) / (1 - e)) * -this.F; // F: world pixels per Mercator unit
  return c;
}
function oe(a, b, c) { null != b && (a = m.max(a, b)); null != c && (a = m.min(a, c)); return a; }
function re(a) { return m.PI / 180 * a; }
Then I implemented it in my vertex shader based on the documentation on Google Maps coordinates.
Basically, I have an event listener that sends the updated projection constants, the viewport bounds, and the zoom level to my shader.
My shader then calculates the new screen coordinates from these inputs.
highp float e, x, y, offsetY, offsetX;
// projection transformation for target points
e = sin(p.y* PI/180.0);
y = prj_y + 0.5 * log((1.0+e)/(1.0-e))*(-F);
x = prj_x + p.x*B;
// projection transformation for offset (bounds)
e = sin(bound_y*PI/180.0);
offsetY = prj_y + 0.5 * log((1.0+e)/(1.0-e))*(-F);
offsetX = prj_x + bound_x*B;
// calculate actual pixel coord wrt zoom/numTiles
x = (x* numTiles - offsetX* numTiles);
y = (y* numTiles - offsetY* numTiles);
gl_PointSize = 5.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(x,y,0.0,1.0);
However, as shown in the screenshot below, there seem to be some errors: the rendered geometries are distorted. (I used the Google Maps polygon API to render some of the geometries for comparison.)
Screenshot Here
I am totally at a loss. What might be the reason for this distortion?
I suspect that the single precision in the shader is giving rise to the error, so I am wondering if there is any workaround.
It is hard to debug this piece of code and diagnose the cause of the issue. I would suggest using the CanvasLayer library, which hides all these concrete details of computing the coordinates at which to draw your polygons. That way you can focus on your app code and functionality, and the projected image will line up better with the map.
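For reference, a minimal CanvasLayer setup could look like the sketch below (the option names follow the published CanvasLayer utility sample; treat them as assumptions if your copy differs):
var canvasLayer = new CanvasLayer({
  map: map,                // your google.maps.Map instance
  updateHandler: update,   // called whenever the layer needs repainting
  animate: false
});
function update() {
  // issue WebGL draw calls here, using canvasLayer.canvas as the drawing surface
}
On the suspected precision issue: float32 does lose precision at world-pixel magnitudes on high zoom levels. A common mitigation (my suggestion, not part of the original answer) is to do the Mercator math in JavaScript in double precision and upload only small offsets relative to a local origin:
// Web Mercator world coordinates in the 0..1 range
// (multiply by 256 for Google's world pixels at zoom 0)
function toMercator(lat, lng) {
  var x = (lng + 180) / 360;
  var s = Math.sin(lat * Math.PI / 180);
  var y = 0.5 - Math.log((1 + s) / (1 - s)) / (4 * Math.PI);
  return { x: x, y: y };
}
// `points` is a hypothetical array of { lat, lng } objects
var origin = toMercator(map.getCenter().lat(), map.getCenter().lng());
var offsets = points.map(function (p) {
  var mp = toMercator(p.lat, p.lng);
  // small offsets survive the float32 round-trip into the shader
  return { x: mp.x - origin.x, y: mp.y - origin.y };
});
// Upload `offsets` as the vertex attribute and add the origin back through the
// model-view matrix, computed on the CPU in double precision each frame.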