How to reproduce the section box cut from Revit in Forge Viewer? - autodesk-forge

I'm working on an app that reproduces the section box cut from Revit in the Forge Viewer. I've got the max point and min point of the section box using the code below in the Revit API:
BoundingBoxXYZ currentSectionBox = view3D.GetSectionBox();
double[] minPt = new double[] {
    currentSectionBox.Transform.Origin.X + currentSectionBox.Min[0],
    currentSectionBox.Transform.Origin.Y + currentSectionBox.Min[1],
    currentSectionBox.Transform.Origin.Z + currentSectionBox.Min[2]
};
double[] maxPt = new double[] {
    currentSectionBox.Transform.Origin.X + currentSectionBox.Max[0],
    currentSectionBox.Transform.Origin.Y + currentSectionBox.Max[1],
    currentSectionBox.Transform.Origin.Z + currentSectionBox.Max[2]
};
And the same section box can be reproduced in Revit with this code:
...
// view3D is the currently opened 3D view in Revit
view3D.SetSectionBox(new BoundingBoxXYZ() {
    Max = new XYZ(maxPt[0], maxPt[1], maxPt[2]),
    Min = new XYZ(minPt[0], minPt[1], minPt[2])
});
So far so good. Then I used the same max and min points in the Forge Viewer. I expected to see the same result as in Revit, but I didn't. Is anything wrong in my code, or am I misunderstanding some concept here?
let offset = this.Viewer3D.model.getData().globalOffset
offset = new THREE.Vector3(offset.x, offset.y, offset.z)
const sectionBoxPosition = new THREE.Box3(minPt.sub(offset), maxPt.sub(offset))
this.Viewer3D.loadExtension('Autodesk.Section').then(function (Section) {
    Section.setSectionBox(sectionBoxPosition)
})

The way you used to apply the global offset looks incorrect.
I would advise you to do the calculation as below, using viewer.model.getModelToViewerTransform() to get the correct model-to-viewer transform. Sometimes there are other transforms applied to the model besides the global offset.
var boxMinFromRvt = new THREE.Vector3(-20.9539269351606, -128.710696355516, -43.8630604978775); //!<<< From Revit API's BoundingBoxXYZ.Min
var boxMaxFromRvt = new THREE.Vector3(73.7218284399634, 102.143481472216, 43.8630604978775); //!<<< From Revit API's BoundingBoxXYZ.Max
var boxTransformFromRvt = new Autodesk.Viewing.Private.LmvMatrix4(true).fromArray([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 7.49058457162347, -35.3016883690933, 16.5749846091418, 1]); //!<<< From Revit API's BoundingBoxXYZ.Transform
var modelTransform = viewer.model.getModelToViewerTransform();
var minOffseted = boxMinFromRvt.clone().applyMatrix4(boxTransformFromRvt).applyMatrix4(modelTransform);
var maxOffseted = boxMaxFromRvt.clone().applyMatrix4(boxTransformFromRvt).applyMatrix4(modelTransform);
var box = new THREE.Box3(minOffseted, maxOffseted);
viewer.getExtension('Autodesk.Section').setSectionBox(box);
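As a side note, here is a minimal sketch (not part of the original answer) of how that 16-element column-major array could be assembled, assuming your Revit add-in serializes BoundingBoxXYZ.Transform as plain [x, y, z] arrays for BasisX, BasisY, BasisZ and Origin (a hypothetical payload shape):
function revitTransformToLmvMatrix(t) {
    // t = { basisX: [x, y, z], basisY: [x, y, z], basisZ: [x, y, z], origin: [x, y, z] }
    // Column-major layout: each basis vector is one column, the origin is the translation column.
    return new Autodesk.Viewing.Private.LmvMatrix4(true).fromArray([
        t.basisX[0], t.basisX[1], t.basisX[2], 0,
        t.basisY[0], t.basisY[1], t.basisY[2], 0,
        t.basisZ[0], t.basisZ[1], t.basisZ[2], 0,
        t.origin[0], t.origin[1], t.origin[2], 1
    ]);
}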
Here are the snapshots of my tests: one in Revit and one in Forge Viewer (screenshots omitted).

Related

x, y, z coordinates of an object for an NWC file in Forge Viewer

I am trying to find the x, y, z coordinates of an object inside an NWC model, and I am using the code below.
Although this code was working for RVT files, it is not working for the NWC model.
Is there a way to get x, y, z coordinates from an NWC model?
getFragmentWorldMatrixByNodeId(nodeId) {
    let result = {
        fragId: [],
        matrix: [],
    };
    let viewer = this.viewer;
    this.tree.enumNodeFragments(nodeId, function (frag) {
        let fragProxy = viewer.impl.getFragmentProxy(viewer.model, frag);
        let matrix = new THREE.Matrix4();
        fragProxy.getWorldMatrix(matrix);
        result.fragId.push(frag);
        result.matrix.push(matrix);
    });
    return result;
}
You mentioned you are looking for the "x, y, z coordinates of an object". What exactly do you mean by that? I'm going to assume you want the coordinates of the center point of the object's bounding box, as that is what people usually ask for. In your code snippet, however, you're retrieving the entire transformation matrix, not a position.
If the center point of a bounding box works for you, you could obtain it like so:
function getObjectBoundingBox(model, dbid) {
    const tree = model.getInstanceTree();
    const frags = model.getFragmentList();
    let totalBounds = new THREE.Box3();
    tree.enumNodeFragments(dbid, function (fragid) {
        let fragBounds = new THREE.Box3();
        frags.getWorldBounds(fragid, fragBounds);
        totalBounds.union(fragBounds);
    }, true);
    return totalBounds;
}
getObjectBoundingBox(viewer.model, 123).center();
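For instance, a short usage sketch (an assumption: some object is selected and its geometry has finished loading; Box3.center() is the API of the three.js build bundled with the viewer, while newer three.js releases use getCenter() instead):
const dbid = viewer.getSelection()[0]; // or any known dbId
const center = getObjectBoundingBox(viewer.model, dbid).center();
console.log('Bounding box center:', center.x, center.y, center.z);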

Revit shared coordinates to Forge viewer

What is the correct process for getting a transform between Forge coordinates and Revit's shared coordinates? I know there is globalOffset, but does it reference the Revit project internal coordinate system or shared coordinates?
Update (Jun 11th, 2021)
Now my MultipleModelUtil.js supports the alignments I shared below. We can also easily tell the Forge Viewer to use By shared coordinates to aggregate models. Here is the code snippet, and you can check here for the supported alignments:
const util = new MultipleModelUtil( viewer );
util.options = {
    alignment: MultipleModelAlignmentType.ShareCoordinates
};
const models = [
    { name: '1.rvt', urn: 'urn:dXJuOmFkc2sud2lwcHJvZDpmcy5maWxlOnZmLlNpaHgxOTVuUVJDMHIyWXZUSVRuZFE_dmVyc2lvbj0x' },
    { name: '2.rvt', urn: 'urn:dXJuOmFkc2sud2lwcHJvZDpmcy5maWxlOnZmLldVRHJ4ajZ6UTBPLTRrbWZrZ3ZoLUE_dmVyc2lvbj0x' },
    { name: '3.rvt', urn: 'urn:dXJuOmFkc2sud2lwcHJvZDpmcy5maWxlOnZmLjRyZW5HRTNUU25xNHhYaW5xdWtyaWc_dmVyc2lvbj0x' }
];
util.processModels( models );
==================
First, the Forge Viewer supports three kinds of Revit link methods, listed below; the third one (By shared coordinates) is the one to look at.
Origin to origin: Apply the globalOffset of the 1st model to others. Check MultipleModelUtil/MultipleModelUtil.js for the demo
Center to center: the default way of the viewer.
By shared coordinates: set applyRefpoint: true and set the globalOffset to the refPoint. This method is the one you are looking for.
The refPoint is the Revit survey point location in the Revit internal coordinate system. It's accessible via the AecModelData. You can also take advantage of the AggregatedView to use this alignment option. Here is an example showing how to use AggregatedView:
https://gist.github.com/yiskang/c404af571ba4d631b5929c777503891e
If you want to use this logic with the Viewer class directly, here is a code snippet for you:
let globalOffset = null;
const aecModelData = bubbleNode.getAecModelData();
const tf = aecModelData && aecModelData.refPointTransformation; // Matrix4x3 as array[12]
const refPoint = tf ? { x: tf[9], y: tf[10], z: 0.0 } : { x: 0, y: 0, z: 0 };
// Check if the current globalOffset is sufficiently close to the refPoint to avoid inaccuracies.
const MaxDistSqr = 4.0e6;
const distSqr = globalOffset && THREE.Vector3.prototype.distanceToSquared.call(refPoint, globalOffset);
if (!globalOffset || distSqr > MaxDistSqr) {
    globalOffset = new THREE.Vector3().copy(refPoint);
}
viewer.loadDocumentNode(doc, bubbleNode, { applyRefpoint: true, globalOffset: globalOffset, keepCurrentModels: true });
The bubbleNode can be either of the following:
bubbleNode = doc.getRoot().getDefaultGeometry()
//Or
const viewables = viewerDocument.getRoot().search({'type':'geometry'});
bubbleNode = viewables[0];
To get AecModelData, please refer to my gist: https://gist.github.com/yiskang/c404af571ba4d631b5929c777503891e#file-index-html-L67
// Call this line before using AecModelData
await doc.downloadAecModelData();
// doc.downloadAecModelData(() => resolve(doc));
See here for details of the AecModelData: https://forge.autodesk.com/blog/consume-aec-data-which-are-model-derivative-api
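For orientation, here is a minimal sketch of the overall call order (an assumption-based outline, not from the gist; urn is your translated model's URN and computeGlobalOffset is a hypothetical helper wrapping the refPoint/globalOffset logic shown above):
Autodesk.Viewing.Document.load('urn:' + urn, async (doc) => {
    await doc.downloadAecModelData(); // must complete before calling getAecModelData()
    const bubbleNode = doc.getRoot().getDefaultGeometry();
    const globalOffset = computeGlobalOffset(bubbleNode); // hypothetical helper, see the snippet above
    viewer.loadDocumentNode(doc, bubbleNode, { applyRefpoint: true, globalOffset, keepCurrentModels: true });
}, (errorCode) => console.error(errorCode));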
I've also found success feeding the refPointTransformation into a Matrix4.
This way, the orientation of the model is also taken into account. (This is based on Eason's answer.)
const bubbleNode = doc.getRoot().getDefaultGeometry();
await doc.downloadAecModelData();
const aecModelData = bubbleNode.getAecModelData();
const tf = aecModelData && aecModelData.refPointTransformation;
const matrix4 = new THREE.Matrix4()
    .makeBasis(
        new THREE.Vector3(tf[0], tf[1], tf[2]),
        new THREE.Vector3(tf[3], tf[4], tf[5]),
        new THREE.Vector3(tf[6], tf[7], tf[8])
    )
    .setPosition(new THREE.Vector3(tf[9], tf[10], tf[11]));
viewer.loadDocumentNode(doc, bubbleNode, {
    placementTransform: matrix4,
    keepCurrentModels: true,
    globalOffset: { x: 0, y: 0, z: 0 },
    applyRefpoint: true,
    applyScaling: 'ft',
});

Forge Viewer, raster images

Is it possible to directly load a raster image (PNG, JPG, TIFF) into the Forge Viewer?
I see the Autodesk.PDF add-in that can load PDFs, but I can't find any Autodesk.IMAGE add-in...
Otherwise I would need to first convert the image into a PDF and then load it through Autodesk.PDF.
The Autodesk Forge Viewer is based on Three.js, so you can use the Three.js API to load an image/texture; there is no need for a Viewer extension for that.
However, it depends on what you want to do. If you just want to load an image into the scene, this code is enough:
const texture = THREE.ImageUtils.loadTexture( "thumbnail256.png" );
const material = new THREE.MeshBasicMaterial({ map : texture });
const geometry = new THREE.PlaneGeometry(5, 20, 32);
const planeMesh = new THREE.Mesh(geometry, material);
planeMesh.position.set(1, 2, 3);
NOP_VIEWER.overlays.addScene('custom-scene');
NOP_VIEWER.overlays.addMesh(planeMesh, 'custom-scene');
But if you want to apply the texture on an existing element in the loaded scene, you need to proceed like this:
const texture = THREE.ImageUtils.loadTexture( "thumbnail256.png" );
const material = new THREE.MeshBasicMaterial({ map : texture, side: THREE.DoubleSide });
NOP_VIEWER.impl.matman().addMaterial('custom-material', material, true);
const model = NOP_VIEWER.model;
model.unconsolidate(); // If the model is consolidated, material changes won't have any effect
const tree = model.getInstanceTree();
const frags = model.getFragmentList();
const dbids = NOP_VIEWER.getSelection();
for (const dbid of dbids) {
    tree.enumNodeFragments(dbid, (fragid) => {
        frags.setMaterial(fragid, material);
    });
}
NOP_VIEWER.impl.invalidate(true, true, false);
Note you may need to work out the texture uv, depending on the geometry.

How to change the color of sphere objects dynamically (using SceneBuilder in Autodesk Forge)

I am working on the example from the Custom models in Forge Viewer blog post by Petr Broz. I am facing an issue updating the color of the sphere objects dynamically. I get the value of each sphere's color from a JSON file like this: "color": "#FF0000". I have created 3 spheres, but the rest of them end up with the first sphere's color. Why is the color not updating for the other spheres? In case the problem is using the same material, I also tried keeping sphereMaterial in an array, as shown below. Is that wrong, or how can I update the color?
var spherecolor = '';
var sphereMaterial = [];
const button = document.getElementById('button-geometry');
button.addEventListener('click', async function () {
    const sceneBuilder = await viewer.loadExtension('Autodesk.Viewing.SceneBuilder');
    const modelBuilder = await sceneBuilder.addNewModel({ conserveMemory: true, modelNameOverride: 'My Custom Model' });
    for (var i = 0; i < numOfSphere; i++) {
        addGeometry(modelBuilder, jsonGeomConfig.geom[i].dbId, i);
    }
});
function addGeometry(modelBuilder, dbId, i) {
    const sphereGeometry = new THREE.BufferGeometry().fromGeometry(new THREE.SphereGeometry(0.05, 8, 10));
    // Getting spherecolor from json file
    spherecolor = jsonGeomConfig.geom[i].color;
    sphereMaterial[i] = new THREE.MeshPhongMaterial({ color: spherecolor });
    const sphereTransform = new THREE.Matrix4().compose(
        new THREE.Vector3(jsonGeomConfig.geom[i].Position.posX, jsonGeomConfig.geom[i].Position.posY, jsonGeomConfig.geom[i].Position.posZ),
        new THREE.Quaternion(0, 0, 0, 1),
        new THREE.Vector3(2, 2, 2)
    );
    modelBuilder.addMaterial('MyCustomMaterial', sphereMaterial[i]);
    const sphereGeomId = modelBuilder.addGeometry(sphereGeometry);
    const sphereFragId = modelBuilder.addFragment(sphereGeomId, 'MyCustomMaterial', sphereTransform);
    modelBuilder.changeFragmentsDbId(sphereFragId, dbId);
}
Be sure to give materials with different colors different names ... otherwise the previously added material gets overridden. See this live environment:
modelBuilder.addMaterial('MyCustomMaterial'+i, sphereMaterial[i]);
const sphereGeomId = modelBuilder.addGeometry(sphereGeometry);
const sphereFragId = modelBuilder.addFragment(sphereGeomId, 'MyCustomMaterial'+i, sphereTransform);
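On top of that, a small sketch (an assumption, not part of the original answer): once every sphere has its own material instance, you can recolor a single sphere later by mutating that material's color and forcing a re-render, provided you keep the references in the sphereMaterial array:
function recolorSphere(i, hexColor) {
    sphereMaterial[i].color.set(hexColor); // e.g. '#00FF00'
    viewer.impl.invalidate(true, true, false); // refresh the viewer so the change becomes visible
}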

Forge Viewer - getWorldCoordinates is giving different values on different occasions

I created the function below to get world coordinates back, but it gives different values on two occasions.
When clicking a dbId, I get the dbId coordinates and pass them to the function below, which gives me world coordinates. But as you can see, when I save that dbId selection to the DB and reload the page later to view it again, it gives me different coordinates.
Why does this happen?
Saving dbId phase
dbId coordinates:
x: -26.277027130126953
y: 18.102033615112305
z: -7.173819303512573
getWorldCoordinates:
x: 256.76347287180107
y: 306.8180434914181
z: 0
Reloading page phase
dbId coordinates:
x: -26.277027130126953
y: 18.102033615112305
z: -7.173819303512573
getWorldCoordinates:
x: 422.50000131979897
y: 249.49997927733767
z: 0
function getWorldCoordinates(position) {
    var screenpoint = viewer.worldToClient(
        new THREE.Vector3(position.x, position.y, position.z));
    return screenpoint;
}
function getObjPosition(dbId) {
    const model = viewer.model;
    const instanceTree = model.getData().instanceTree;
    const fragList = model.getFragmentList();
    let bounds = new THREE.Box3();
    instanceTree.enumNodeFragments( dbId, ( fragId ) => {
        let box = new THREE.Box3();
        fragList.getWorldBounds( fragId, box );
        bounds.union( box );
    }, true );
    const position = bounds.center();
    return position;
}
Unfortunately, I was unable to reproduce the issue...
Try the live demo here: refresh the page and see the world coords prompted when the model completes loading...
Be sure to convert the coordinates after the model has finished loading (e.g. after TEXTURES_LOADED_EVENT), otherwise you may get erratic results. Also keep in mind that worldToClient() projects through the current camera, so its output changes whenever the camera position or the canvas size changes:
NOP_VIEWER.addEventListener(Autodesk.Viewing.TEXTURES_LOADED_EVENT, () => {
    alert(JSON.stringify(NOP_VIEWER.worldToClient(
        new THREE.Vector3(-26.277027130126953, 18.102033615112305, -7.173819303512573))));
});
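If the goal is to keep a screen-space marker attached to the saved object, one common pattern is to persist only the world-space position and re-run worldToClient() whenever the camera moves, since the client coordinates depend on the current camera and canvas size. A sketch (an assumption, not from the original answer; savedWorldPos is whatever world-space point you stored in your DB):
viewer.addEventListener(Autodesk.Viewing.CAMERA_CHANGE_EVENT, () => {
    const screenPoint = viewer.worldToClient(
        new THREE.Vector3(savedWorldPos.x, savedWorldPos.y, savedWorldPos.z));
    // reposition your HTML marker/overlay using screenPoint.x and screenPoint.y here
});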