How can I get the viewer coordinates of AutoCAD geometry? - autodesk-forge

I am using the 2D Autodesk Forge Viewer, and I'm looking for a way to determine the X,Y coordinate of a block reference object from AutoCAD.
I have the dbId for the geometry element, and I can get some information through NOP_VIEWER.getProperties() and NOP_VIEWER.getDimensions(), but neither of those includes the X,Y coordinate.

With help from Xiaodong below, I was able to devise the following solution to get the X,Y coordinate of an object from its dbId:
const geoList = NOP_VIEWER.model.getGeometryList().geoms;
const readers = [];
for (const geom of geoList) {
    // The geometry list can contain empty slots, so skip those.
    if (geom) {
        readers.push(new Autodesk.Viewing.Private.VertexBufferReader(geom, NOP_VIEWER.impl.use2dInstancing));
    }
}

const findObjectLocation = (objectId) => {
    for (const reader of readers) {
        let result;
        reader.enumGeomsForObject(objectId, {
            // onLineSegment receives (x0, y0, x1, y1, vpId); capturing the first
            // two arguments records the start point of the segment.
            onLineSegment: (x, y) => {
                result = { x, y };
            },
        });
        if (result) {
            return result;
        }
    }
    throw new Error(`Unable to find requested object`);
};
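For example, with a dbId obtained from a selection event or the model tree (the id below is hypothetical):
const pos = findObjectLocation(1234); // 1234 is a hypothetical dbId
console.log(`Object starts near (${pos.x}, ${pos.y})`);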

As I remember, it is true that the position data is not available for a block entity. I will check with the engineering team whether there is any comment about the native position data of blocks. One alternative is to use Forge Design Automation for AutoCAD to extract the data yourself, though that would require additional code.
After Forge translates the source DWG, the entities are converted to primitives. Through the API it is feasible to get geometry info for the primitives, such as the start point of a line or the center of a circle. This blog post explains the details:
https://forge.autodesk.com/blog/working-2d-and-3d-scenes-and-geometry-forge-viewer
Essentially, it uses the callback function:
VertexBufferReader.prototype.enumGeomsForObject = function(dbId, callback)
The callback object needs these optional functions:
• onLineSegment(x0, y0, x1, y1, viewport_id)
• onCircularArc(centerX, centerY, startAngle, endAngle, radius, viewport_id)
• onEllipticalArc(centerX, centerY, startAngle, endAngle, major, minor, tilt, viewport_id)
• onTriangleVertex(x, y, viewport_id)
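A minimal sketch of a callback object implementing all four handlers, assuming a reader constructed as in the solution above:
const callbacks = {
    onLineSegment: (x0, y0, x1, y1, vpId) => console.log(`line (${x0}, ${y0}) -> (${x1}, ${y1})`),
    onCircularArc: (cx, cy, start, end, radius, vpId) => console.log(`arc at (${cx}, ${cy}), r=${radius}`),
    onEllipticalArc: (cx, cy, start, end, major, minor, tilt, vpId) => console.log(`ellipse at (${cx}, ${cy})`),
    onTriangleVertex: (x, y, vpId) => console.log(`triangle vertex (${x}, ${y})`),
};
reader.enumGeomsForObject(dbId, callbacks);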

Related

Segmenting on Arcs from DWG File

I have an application using the Forge Viewer to display converted AutoCAD DWG files. The short description is that I need to take specific polylines out of the DWG file source and use the Edit2D extension to draw them as polygons over the background. I have this working, but arcs are causing some issues right now. This doesn't have to be perfect, but it should be reasonably close to the same shape. In most cases it just draws a line from the start to the end of the arc (and I understand why, see the code below), but in other cases it significantly segments the arc and I'm not sure why.
I start by finding the ids of the polylines based on their layer and then getting the fragment ids (this is working fine). Then I get the vertices for the polyline like this:
export function getVertexesById(
    viewer: Autodesk.Viewing.GuiViewer3D,
    frags: Autodesk.Viewing.Private.FragmentList,
    fragIds: number[],
    dbId: number
): Point[] {
    // We need to also get the center points of arcs as lines seem to be drawn to them in the callbacks for some
    // reason. Center points should later be removed from the point array, so we don't get strange spikes on our shapes.
    const polyPoints: Point[] = [];
    const centers: Point[] = [];
    fragIds.forEach((fid) => {
        const mesh = frags.getVizmesh(fid);
        const vbr = new Autodesk.Viewing.Private.VertexBufferReader(
            mesh.geometry,
            viewer.impl.use2dInstancing
        );
        vbr.enumGeomsForObject(dbId, {
            onLineSegment(x1: number, y1: number, x2: number, y2: number, _vpId: number) {
                checkAddPoint(polyPoints, { x: x1, y: y1, z: 0 });
                checkAddPoint(polyPoints, { x: x2, y: y2, z: 0 });
            },
            onCircularArc: function (cx, cy, start, end, radius, _vpId) {
                centers.push({ x: cx, y: cy, z: 0 });
            },
            onEllipticalArc: function (cx, cy, start, end, major, minor, tilt, _vpId) {
                centers.push({ x: cx, y: cy, z: 0 });
            },
            onOneTriangle: function (x1, y1, x2, y2, x3, y3, _vpId) {
                checkAddPoint(polyPoints, { x: x1, y: y1, z: 0 });
                checkAddPoint(polyPoints, { x: x2, y: y2, z: 0 });
                checkAddPoint(polyPoints, { x: x3, y: y3, z: 0 });
            },
            onTexQuad: function (cx, cy, width, height, rotation, _vpId) {
                centers.push({ x: cx, y: cy, z: 0 });
            },
        });
    });
    centers.forEach((c) => {
        checkRemovePoint(polyPoints, { x: c.x, y: c.y, z: 0 });
    });
    return polyPoints;
}
The functions checkAddPoint and checkRemovePoint are just helper functions that make sure we don't duplicate points, taking rounding into account (so we don't end up with two points such as (0, 0, 0) and (0, 0.00001, 0)).
I then use those points to draw with the Edit2D extension. So what I would expect here is that it creates a series of points that would draw along all the straight lines of the polyline and when it gets to an arc it just draws from one endpoint to the other. That is mostly what I see.
Here is an example file as it looks in ACAD:
Notice there are a handful of breaks in the arc around the outside of the room. What I get when I do the above process is this:
Notice all along the top I get what I would expect. However, along the bottom in 2 places I get a huge number of segments all along the line.
I looked back at the ACAD file, exploded the polyline, and examined it as much as I know how, and I can't find anything different about those two segments vs. the others that would indicate why they act differently.
What would be really awesome is if there were an easy way to just segment along an arc, say every x units, and have it return that, but I'm not expecting that here; I just want to know why it is treating these differently.
Any help is greatly appreciated.
Edit
I should also mention that I have logged the creation routine, and the only callbacks that are ever hit are onLineSegment and onCircularArc. As you can see, the circular-arc callback only records the center point so it can be removed from the list, so all of these extra points are for some reason being read in the line-segment section.
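For reference, this is roughly the segmentation I'd like to end up with; a sketch that tessellates inside the onCircularArc callback, assuming (unverified on my side) that the start/end angles it reports are in radians:
// Sketch: tessellate a circular arc into points every `step` radians
// (assumes onCircularArc reports its angles in radians; unverified).
function tessellateArc(cx, cy, start, end, radius, step) {
    const points = [];
    let sweep = end - start;
    if (sweep <= 0) sweep += 2 * Math.PI; // normalize wrapped/clockwise sweeps
    const count = Math.max(1, Math.ceil(sweep / (step || 0.1)));
    for (let i = 0; i <= count; i++) {
        const a = start + (sweep * i) / count;
        points.push({ x: cx + radius * Math.cos(a), y: cy + radius * Math.sin(a), z: 0 });
    }
    return points;
}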

Autodesk Forge Viewer transform 2D to 3D coordinates

I am testing the possibilities of the Forge v7 viewer in a web browser with a Revit model.
The idea of the test is:
• display two viewers simultaneously, with one 3D view and one 2D view
• click on a point in the 2D view
• display a sphere in the 2D view at this point
• display another sphere in the 3D view at the equivalent point
The spheres are added with SceneBuilder (to allow them to be selected later).
I tried to follow the behavior of
https://forge.autodesk.com/ja/node/1765 and
https://github.com/Autodesk-Forge/viewer-navigation.sample
and I have these questions:
1 - The 2D view must be a sheet; using a 2D view directly (a floor view, for example) does not allow the calculation. Is that correct?
2 - https://forge.autodesk.com/ja/node/1765 seems to have calculation and/or precision issues (cf. forge1.png, generated from the test on Heroku, and forge2.png, generated by my program).
3 - The method only works if you click on an object. How can I retrieve the coordinates if I click in an empty area?
4 - The spheres are identified in the viewer (cf. forge3.png), but:
a - how do I give them a name (and not "object")?
b - how do I replace the name "model" with the name of the scene?
Can you help me?
Thanks in advance
Luc
Here is my code:
// click listener on 2d viewer
function listenerScene(ev) {
    var intersection = viewer2d.hitTest(ev.offsetX, ev.offsetY);
    if (intersection) {
        AddItem(intersection.intersectPoint.x, intersection.intersectPoint.y,
            intersection.intersectPoint.z);
        AddItem3d(intersection);
    }
}

// add in 2d
async function AddItem(x, y, z) {
    // ... add a sphere at (x, y, z)
}

// add in 3d
async function AddItem3d(intersection) {
    const worldPos = sheetToWorld(intersection.intersectPoint,
        viewer2d.model, viewer.model);
    if (worldPos) {
        // ... add a sphere at worldPos
    }
}

// compute 3d point
function sheetToWorld(sheetPos, model2d, model3d) {
    const viewportExt = viewer2d.getExtension('Autodesk.AEC.ViewportsExtension');
    const viewport = viewportExt.findViewportAtPoint(model2d, new THREE.Vector2(sheetPos.x, sheetPos.y));
    if (!viewport) {
        return null;
    }
    // Map the sheet point into the 3D model's world space, then remove the global offset.
    const sheetUnitScale = model2d.getUnitScale();
    const globalOffset = model3d.getData().globalOffset;
    const matrix = viewport.get2DTo3DMatrix(sheetUnitScale);
    var worldPos = sheetPos.clone().applyMatrix4(matrix);
    worldPos = worldPos.sub(globalOffset);
    return worldPos;
}
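For completeness, the click listener is attached to the 2D viewer; the wiring below is my assumption of a typical setup (viewer2d.canvas being the viewer's canvas element):
// Hypothetical wiring: register the click listener on the 2D viewer's canvas.
viewer2d.canvas.addEventListener('click', listenerScene);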

GLTF file not well positioned by Cesium

I want to display a hurricane (a big isosurface object) in Cesium. For this I converted an OBJ file with longitude, latitude, altitude columns for each vertex of the isosurface representing the hurricane into a new OBJ file reprojected into the ECEF (Earth-Centered) projection. So the final OBJ file now contains X,Y,Z for each vertex instead of longitude, latitude, altitude. After a final conversion by obj2gltf, I try to display the glTF "hurricane" file in Cesium.js using the code below:
console.log('loading hurricane.gltf');
var mymodel = viewer.scene.primitives.add(Cesium.Model.fromGltf({
    url: 'data/hurricane.gltf',
    modelMatrix: Cesium.Matrix4.IDENTITY,
    asynchronous: false
}));
I can see my hurricane on the earth, but not at the correct position, so I suspect a matrix problem. The IDENTITY matrix does not seem to be the right one. I could try to build a new matrix, but I can't find enough information about the axis orientations used by Cesium.
I verified the X,Y,Z ECEF coordinates and they are correct. Has anyone run into this problem?
If your glTF model origin is at the center of the hurricane, you can place it using a Cesium Entity, something like this:
// Longitude degrees, Latitude degrees, height in meters
var position = Cesium.Cartesian3.fromDegrees(-123.0744619, 44.0503706, height);
var heading = Cesium.Math.toRadians(0);
var pitch = 0;
var roll = 0;
var hpr = new Cesium.HeadingPitchRoll(heading, pitch, roll);
var orientation = Cesium.Transforms.headingPitchRollQuaternion(position, hpr);
var entity = viewer.entities.add({
    name: 'Hurricane',
    position: position,
    orientation: orientation,
    model: {
        uri: 'data/hurricane.gltf'
    }
});
viewer.trackedEntity = entity;
There are more complete working demos of this on Sandcastle.
But, if your hurricane is visible on the surface of the Earth using the identity matrix, that likely means that the origin of that model is nowhere near the center of the hurricane. You may need to edit the glTF file, to make sure that the model is centered on its own origin, and does not have some fixed Earth location pre-baked into the model's internal transformations.
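If you prefer to keep using a primitive rather than an entity, here is a sketch of the equivalent placement via a modelMatrix (again assuming the glTF is centered on its own origin, and reusing the position and height from the snippet above):
// Build an east-north-up frame at the hurricane's location and use it as the
// primitive's modelMatrix (assumes the model is centered on its own origin).
var position = Cesium.Cartesian3.fromDegrees(-123.0744619, 44.0503706, height);
var modelMatrix = Cesium.Transforms.eastNorthUpToFixedFrame(position);
var mymodel = viewer.scene.primitives.add(Cesium.Model.fromGltf({
    url: 'data/hurricane.gltf',
    modelMatrix: modelMatrix
}));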

Autodesk Forge Viewer : f2d get frag from dbid

I am trying to fill rooms with color in the 2D viewer of a converted Revit file.
I have a Revit file that has "rooms" defined. The Revit file also has sheets defined: "Floor one" and "Floor two". When I convert it using the Forge API, I get an SVF for the Revit 3D view and f2d files for the "Floor one" and "Floor two" sheets.
For the SVF I was able to get fragIds from dbIds (see this other post). Now I'm trying to do the same for the f2d files.
I am able to change the color of the room walls if I know the wall shape's dbId by using
viewer.setThemingColor(dbid, new THREE.Vector4(0, 1, 1, 1));
What I want to do now is get the fragId of the shape in 2D so that I can get the start and stop vertices of the lines it uses. I want to know these vertices so I can build a custom mesh and fill it with color for room "hatching".
My problem is that I do not know the f2d format. It seems it is all one mesh, with the shader controlling the color of the lines. Can anyone give me any pointers on how to get the fragment list of the room?
This is what I used for the 3D SVF:
function getFragIdFromDbId(viewer, dbid) {
    var returnValue;
    var it = viewer.model.getData().instanceTree;
    it.enumNodeFragments(dbid, function (fragId) {
        //console.log("dbId: " + dbid + " FragId : " + fragId);
        returnValue = fragId;
    }, false);
    return returnValue;
}
What can I use for f2d to do the same when the f2d has viewer.model.getData().instanceTree = undefined?
Fragments can have geometry for multiple dbids, and the geometry for a dbid can be in multiple fragments. It is possible to extract it with Autodesk.Viewing.Private.VertexBufferReader, which is used by the 2D snapper. You could do something like this (a sketch follows at the end of this answer):
1. FragmentList.dbid2fragId[dbid] will return the fragment id, or an array of fragment ids, that contain geometry for the dbid.
2. Loop through the fragments and get the geometry for each fragment.
3. Create a VertexBufferReader using the geometry.
4. Use the VertexBufferReader to find the geometry for the dbid.
The best way to find the geometry is to use VertexBufferReader.enumGeomsForObject(dbid, callback). It uses a callback object to enumerate the geometry for a dbid. The callback object needs these optional functions:
• onLineSegment(x0, y0, x1, y1, viewport_id)
• onCircularArc(centerX, centerY, startAngle, endAngle, radius, viewport_id)
• onEllipticalArc(centerX, centerY, startAngle, endAngle, major, minor, tilt, viewport_id)
• onTriangleVertex(x, y, viewport_id)
This is OK if you just need the primitives and not where they are in the buffer.
You can also use the VertexBufferReader to loop through the geometry in the buffer looking for the dbid. This requires you to know that a primitive in the vertex buffer is 4 vertices if .useInstancing() is false and 1 vertex if it is true. You also need to decode the primitive type from .getVertexFlagsAt(vertexIndex), but we don't have any public values or methods for decoding the flags.
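Putting the numbered steps together, a minimal sketch might look like this (note: the exact property path to dbid2fragId on the fragment list is an assumption and may differ between viewer versions):
// Sketch: enumerate the 2D geometry of a dbid following the steps above.
function enumGeomsForDbId(viewer, dbId, callbacks) {
    const frags = viewer.model.getFragmentList();
    // dbId2fragId maps a dbid to one fragment id or an array of them;
    // its location on the fragment list is an assumption here.
    let fragIds = frags.fragments.dbId2fragId[dbId];
    if (!Array.isArray(fragIds)) {
        fragIds = [fragIds]; // a single fragment comes back as a plain number
    }
    for (const fragId of fragIds) {
        const mesh = frags.getVizmesh(fragId);
        const vbr = new Autodesk.Viewing.Private.VertexBufferReader(
            mesh.geometry, viewer.impl.use2dInstancing);
        vbr.enumGeomsForObject(dbId, callbacks);
    }
}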

Projection drift when rendering in WebGL over Google Map

I am trying to implement a WebGL-based rendering on Google Map (api3) as I want to render a massive amount of dynamic geometries.
Basically, I create a google.maps.OverlayView attached with a WebGL canvas into the map.
However, I encountered some problems with the mapping of the projection. Basically, I extracted the "fromLatLngToPoint" function from the Google Maps API as follows:
function fromLatLngToPoint(a) {
    var c = { x: 0, y: 0 },
        d = this.j;
    c.x = d.x + a.lng * this.B;
    var e = oe(m.sin(re(a.lat)), -(1 - 1E-15), 1 - 1E-15);
    c.y = d.y + .5 * m.log((1 + e) / (1 - e)) * -this.F;
    return c;
}
// oe clamps a to the range [b, c]
function oe(a, b, c) { null != b && (a = m.max(a, b)); null != c && (a = m.min(a, c)); return a; }
// re converts degrees to radians
function re(a) { return m.PI / 180 * a; }
Then I implemented it in my vertex shader based on the documentation in Google Map Coordinates.
Basically, I have an event listener that sends the updated projection constants, the viewport bounds, and the zoom level to my shader.
Then my shader will calculate the new screen coordinates based on these inputs.
// Assumed inputs (uniforms/attributes): p = lat/lng of the vertex; prj_x, prj_y,
// B, F = projection constants extracted from the API; bound_x, bound_y = viewport
// bounds; numTiles = tile count for the current zoom (presumably pow(2, zoom)).
highp float e, x, y, offsetY, offsetX;
// projection transformation for target points
e = sin(p.y * PI / 180.0);
y = prj_y + 0.5 * log((1.0 + e) / (1.0 - e)) * (-F);
x = prj_x + p.x * B;
// projection transformation for offset (bounds)
e = sin(bound_y * PI / 180.0);
offsetY = prj_y + 0.5 * log((1.0 + e) / (1.0 - e)) * (-F);
offsetX = prj_x + bound_x * B;
// calculate actual pixel coord wrt zoom/numTiles
x = (x * numTiles - offsetX * numTiles);
y = (y * numTiles - offsetY * numTiles);
gl_PointSize = 5.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(x, y, 0.0, 1.0);
However, as shown in the screenshot below, there seem to be some errors: the rendered geometries are distorted. (I used the Google Maps polygon API to render some of the geometries for comparison.)
Screenshot Here
I am totally at a loss, what might be the reason for this distortion?
I am suspecting that the single precision in the shader is giving rise to the error. So I am wondering if there is any workaround?
It is hard to debug this piece of code and diagnose the cause of the issue. I would suggest using the CanvasLayer library, which hides the concrete details of computing the coordinates at which to draw your polygons, so you can focus on your app code and functionality. Performance should also be better in terms of the projected image.
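A hypothetical minimal wiring of CanvasLayer (the option names are taken from the library's samples as I remember them; verify against the version you use):
// Assumed CanvasLayer options: map, resizeHandler, updateHandler, animate.
var canvasLayer = new CanvasLayer({
    map: map,               // the google.maps.Map instance
    resizeHandler: resize,  // called when the canvas size changes
    updateHandler: update,  // called each frame; do your WebGL drawing here
    animate: true
});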