Is there a way to get element position information from a Revit project viewed in the Forge Viewer? We will be moving elements in the viewer (mostly family instances), and I need to find out the new position of the elements after they are moved. It should match the Revit API LocationPoint data.
When a design file (Revit, or any other supported file format) is processed by the Model Derivative service, and converted into the viewer format (SVF or SVF2), every element is turned into a "fragment" with its own transformation matrix. You can read (and even modify) the fragment information using the Viewer API:
const frags = viewer.model.getFragmentList();
function listFragmentProperties(fragId) {
    console.log('Fragment ID:', fragId);
    const objectIds = frags.getDbIds(fragId); // Get IDs of all objects linked to this fragment
    console.log('Linked object IDs:', objectIds);
    let matrix = new THREE.Matrix4();
    frags.getWorldMatrix(fragId, matrix); // Get the fragment's world matrix
    console.log('World matrix:', matrix);
    let bbox = new THREE.Box3();
    frags.getWorldBounds(fragId, bbox); // Get the fragment's world bounds
    console.log('World bounds:', bbox);
}
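For example (a sketch, not part of the original snippet), you could run it over the currently selected objects by walking the instance tree:
viewer.getSelection().forEach(function (dbId) {
    // Enumerate all fragments belonging to this object (recursively)
    viewer.model.getInstanceTree().enumNodeFragments(dbId, listFragmentProperties, true);
});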
And to modify the transformation of a fragment, try the following:
const frags = viewer.model.getFragmentList();
function modifyFragmentTransform(fragId) {
    let scale = new THREE.Vector3();
    let rotation = new THREE.Quaternion();
    let translation = new THREE.Vector3();
    frags.getAnimTransform(fragId, scale, rotation, translation); // read the current transform
    translation.z += 10.0; // move the fragment 10 units up
    scale.x = scale.y = scale.z = 1.0;
    frags.updateAnimTransform(fragId, scale, rotation, translation); // write the transform back
}
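To compare those values with the Revit API's LocationPoint, keep in mind that the viewer usually shifts the whole model by a global offset to keep coordinates near the origin. A hedged sketch of undoing that shift (globalOffset may be undefined for some models, and units follow the source file):
function getRevitPosition(model, fragId) {
    const frags = model.getFragmentList();
    const matrix = new THREE.Matrix4();
    frags.getWorldMatrix(fragId, matrix);
    // Extract the translation component of the fragment's world matrix
    const position = new THREE.Vector3().setFromMatrixPosition(matrix);
    // Undo the global offset applied by the viewer at load time (if any)
    const offset = model.getData().globalOffset;
    if (offset) {
        position.add(new THREE.Vector3(offset.x, offset.y, offset.z));
    }
    return position; // should be comparable to the Revit LocationPoint
}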
Related
We are using Forge Viewer (v7) in our web application.
Our requirement is to crop a particular room/area in the Forge Viewer. For example, if we show a house model in the viewer and a user selects the kitchen (from a menu or navbar), the viewer should show only the kitchen area (including all of its objects, like cabinets, burner, fridge, sink, etc.), and all other objects/sections should be hidden. Similarly for bedrooms, baths, etc. It is just for viewing purposes at run time, not for any automation.
We are getting the room coordinates (min and max X, Y, Z) with the following code, using the Forge API (with the Revit engine):
GeometryElement geoElement = room.ClosedShell;
BoundingBoxXYZ boundingBox = geoElement.GetBoundingBox();
XYZ min = boundingBox.Min;
XYZ max = boundingBox.Max;
We are using the viewer.setCutPlanes function to draw cut planes in the viewer:
var minPt = new THREE.Vector3(x, y, z); //!<<< put your point here
var maxPt = new THREE.Vector3(x, y, z); //!<<< put your point here

const normals = [
    new THREE.Vector3(1, 0, 0),
    new THREE.Vector3(0, 1, 0),
    new THREE.Vector3(0, 0, 1),
    new THREE.Vector3(-1, 0, 0),
    new THREE.Vector3(0, -1, 0),
    new THREE.Vector3(0, 0, -1)
];

const bbox = new THREE.Box3(minPt, maxPt);
const cutPlanes = [];

for (let i = 0; i < normals.length; i++) {
    const plane = new THREE.Plane(normals[i], -1 * maxPt.dot(normals[i]));
    // offset plane with negative normal to form an octant
    if (i > 2) {
        const ptMax = plane.orthoPoint(bbox.max);
        const ptMin = plane.orthoPoint(bbox.min);
        const size = new THREE.Vector3().subVectors(ptMax, ptMin);
        plane.constant -= size.length();
    }
    const n = new THREE.Vector4(plane.normal.x, plane.normal.y, plane.normal.z, plane.constant);
    cutPlanes.push(n);
}

viewer.setCutPlanes(cutPlanes);
But when we pass these coordinates (obtained from the API) to this front-end JS code, the cut planes are created at an incorrect position. For example, when we pass the coordinates of the kitchen, it crops a small portion of the roof, and the same happens with all the other rooms.
The likely reason is that the Revit and Forge Viewer coordinate systems are not the same.
Does anyone have an idea how we can map these Revit coordinates to the Forge Viewer and draw the cut planes?
If you're following the Forge Viewer tutorial to load the model, then you need to subtract the global offset from the endpoints of the room bounding box, like below:
var minPt = new THREE.Vector3(x, y, z); //!<<< put your point here
var maxPt = new THREE.Vector3(x, y, z); //!<<< put your point here
var offsetMatrix = viewer.model.getData().placementWithOffset;
var offsetMinPt = minPt.clone().applyMatrix4(offsetMatrix); // clone first: applyMatrix4 mutates the vector
var offsetMaxPt = maxPt.clone().applyMatrix4(offsetMatrix);
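Then build the Box3 and cut planes from the earlier snippet using offsetMinPt/offsetMaxPt in place of minPt/maxPt, and the planes should land on the room instead of the roof.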
I have another solution: modify the model by hand (cut planes, hiding, isolating elements) until you get the view you want to show. Then call viewer.getState() and store the returned data in your database. Later, use viewer.restoreState(data) to recall that view.
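A minimal sketch of that flow (saveViewToDatabase and loadViewFromDatabase are hypothetical persistence helpers, not part of the Viewer API):
// Capture the current view: camera, cut planes, isolation, etc.
var state = viewer.getState();
saveViewToDatabase('kitchen', JSON.stringify(state)); // hypothetical helper

// Later, when the user picks "Kitchen" from the menu:
var saved = JSON.parse(loadViewFromDatabase('kitchen')); // hypothetical helper
viewer.restoreState(saved);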
I am a newbie and have a 3ds Max model converted to glTF with lots of texture images. After loading it with Cesium and flying to the model, the white untextured model loads first, and then it takes a long time to render the textures. I want to show a loading image until all the textures are rendered. Is there any way to get the texture rendering state?
If you're using the model directly as a graphics primitive (as opposed to using it on a Cesium Entity), then there's a Model.readyPromise that will tell you when the model has finished loading.
Here's a Sandcastle Demo for Cesium 1.54:
var viewer = new Cesium.Viewer('cesiumContainer');
var scene = viewer.scene;
var model;
var modelUrl = '../../../../Apps/SampleData/models/GroundVehicle/GroundVehicle.glb';
var height = 0.0;
var heading = 0.0, pitch = 0.0, roll = 0.0;
var hpr = new Cesium.HeadingPitchRoll(heading, pitch, roll);
var origin = Cesium.Cartesian3.fromDegrees(-123.0744619, 44.0503706, height);
var modelMatrix = Cesium.Transforms.headingPitchRollToFixedFrame(origin, hpr);

scene.primitives.removeAll(); // Remove previous model
model = scene.primitives.add(Cesium.Model.fromGltf({
    url : modelUrl,
    modelMatrix : modelMatrix
}));

console.log('Model is loading...');
model.readyPromise.then(function(model) {
    console.log('Model loading complete.');
    // Zoom to model
    var camera = viewer.camera;
    var controller = scene.screenSpaceCameraController;
    var r = 2.0 * Math.max(model.boundingSphere.radius, camera.frustum.near);
    controller.minimumZoomDistance = r * 0.5;
    var center = Cesium.Matrix4.multiplyByPoint(model.modelMatrix, model.boundingSphere.center, new Cesium.Cartesian3());
    var heading = Cesium.Math.toRadians(230.0);
    var pitch = Cesium.Math.toRadians(-20.0);
    camera.lookAt(center, new Cesium.HeadingPitchRange(heading, pitch, r * 2.0));
}).otherwise(function(error) {
    console.error(error);
});
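If you want the model to hold off until its textures are loaded too (rather than polling for a texture state), Model.fromGltf accepts an incrementallyLoadTextures option; it defaults to true, which is exactly what lets the untextured model show up first. A hedged variation of the snippet above:
model = scene.primitives.add(Cesium.Model.fromGltf({
    url : modelUrl,
    modelMatrix : modelMatrix,
    incrementallyLoadTextures : false // load all textures before the model is ready
}));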
I'm trying to control the camera in the Autodesk Forge Viewer. Setting the target and position seems to work fine, but if I try to set the rotation or quaternion, it does not have any effect.
To get the camera I use the getCamera function, and then applyCamera after I have tried to set the parameters.
What I'm trying to achieve is to use the device orientation on a handheld device to rotate the model. Just using alpha and beta to set the target is not a smooth experience.
// get camera
var cam = _viewer.getCamera();
// get position
var vecPos = cam.position;
// get view vector
var vecViewDir = new THREE.Vector3();
vecViewDir.subVectors(cam.target,cam.position);
// get length of view vector
var length = vecViewDir.length();
// rotate alpha
var vec = new THREE.Vector3();
vec.y = length;
var zAxis = new THREE.Vector3(0,0,1);
vec.applyAxisAngle(zAxis,THREE.Math.degToRad(alpha));
// rotate beta
var vec2 = new THREE.Vector3(vec.x,vec.y,vec.z);
vec2.normalize();
vec2.negate();
vec2.cross(zAxis);
vec.applyAxisAngle(vec2,THREE.Math.degToRad(beta) + Math.PI / 2);
// add to camera
cam.target.addVectors(vecPos,vec);
_viewer.applyCamera(cam,false);
You need to use the setView() method:
_viewer.navigation.setView(pos, target);
You may also need to set the up vector to make sure the camera is oriented the way you want:
_viewer.navigation.setCameraUpVector(upVector);
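Putting that together with the device-orientation code from the question, a rough sketch (assuming alpha and beta in degrees from a deviceorientation handler, and a Z-up model):
function orientCamera(alpha, beta) {
    var nav = _viewer.navigation;
    var target = nav.getTarget();
    var dist = nav.getPosition().distanceTo(target);
    // Build a view direction from the two angles, as in the question
    var dir = new THREE.Vector3(0, dist, 0);
    var zAxis = new THREE.Vector3(0, 0, 1);
    dir.applyAxisAngle(zAxis, THREE.Math.degToRad(alpha));
    var side = dir.clone().normalize().negate().cross(zAxis);
    dir.applyAxisAngle(side, THREE.Math.degToRad(beta) + Math.PI / 2);
    // Move the camera instead of the target, keeping Z up
    nav.setView(target.clone().sub(dir), target.clone());
    nav.setCameraUpVector(zAxis.clone());
}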
I'm developing an AR application using WebRTC (webcam access), JSARToolKit (marker detection) and three.js (3D library).
I want to place 3D objects (exported from Maya using the three.js Maya exporter) at the center of the detected marker.
This is the code where I load the 3D object using the JSONLoader:
// load the model
var loader = new THREE.JSONLoader();
var object;
//var geometry = new THREE.BoxGeometry(1, 1, 1);
loader.load('js/cube.js', function(geometry, materials) {
    var material = new THREE.MeshFaceMaterial(materials);
    object = new THREE.Mesh(geometry, material);
    object.position.x -= ***3DobjectWidth/2***;
    object.position.y -= ***3DobjectHeight/2***;
    object.position.z -= ***3DobjectDepth/2***;
    scene.add(object);
});
I need to get the width, height and depth of the object in order to change its position (see 3DobjectWidth etc.).
Any suggestions?
The object size is available at geometry.boundingBox, but the bounding box has to be computed first.
Try this:
geometry.computeBoundingBox();
var bb = geometry.boundingBox;
var object3DWidth = bb.max.x - bb.min.x;
var object3DHeight = bb.max.y - bb.min.y;
var object3DDepth = bb.max.z - bb.min.z;
Geometry also has a .center() method. That might be simpler and more efficient if you need to center many meshes based on the same geometry, and it calls computeBoundingBox for you, as shown below.
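For instance (a sketch using the same geometry and material as above):
// Center the geometry once; every mesh built from it is then
// already centered on its bounding box.
geometry.center(); // computes the bounding box internally
var object = new THREE.Mesh(geometry, material);
scene.add(object);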
I'm trying to load some STL files using Three.js. The models load correctly, but there are too many triangles that I would like to merge/smooth.
I had successfully applied smoothing to loaded terrains in other 3D formats, but I can't do it with the BufferGeometry that results from loading an STL file with the STLLoader.
var material = new THREE.MeshLambertMaterial( { ... } );
var path = "./models/budah.stl";
var loader = new THREE.STLLoader();
loader.load(path, function (object) {
    object.computeBoundingBox();
    object.computeBoundingSphere();
    object.computeFaceNormals();
    object.computeVertexNormals();
    object.normalizeNormals();
    object.center();

    // Apply smooth
    var modifier = new THREE.SubdivisionModifier(1);
    var smooth = object.clone();
    smooth.mergeVertices();
    smooth.computeFaceNormals();
    smooth.computeVertexNormals();
    modifier.modify(smooth);

    scene.add(smooth);
});
This is what I tried; it throws an error: Uncaught TypeError: smooth.mergeVertices is not a function.
If I comment out the mergeVertices() line, I get a different error: Uncaught TypeError: Cannot read property 'length' of undefined in SubdivisionModifier, line 156.
It seems the sample code I'm trying is outdated (this has been happening a lot recently due to the massive changes in the Three.js library). Or maybe I'm forgetting something. The fact is that the vertices seem to be null..?
Thanks in advance!
It seems I was looking in the wrong direction: smoothing the triangles has nothing to do with the SubdivisionModifier. What I needed was easier than that: just compute the vertex normals BEFORE applying the material, so that it can use SmoothShading instead of FlatShading (did I get that right?).
The problem here was that the BufferGeometry returned by the STLLoader has no plain vertex array computed, so I had to build one manually. After that, apply mergeVertices() just before computeVertexNormals() and voilà! The triangles disappear and everything is smooth:
var material = new THREE.MeshLambertMaterial( { ... } );
var path = "./models/budah.stl";
var loader = new THREE.STLLoader();
loader.load(path, function (object) {
    object.computeBoundingBox();
    object.computeVertexNormals();
    object.center();

    ///////////////////////////////////////////////////////////////
    var attrib = object.getAttribute('position');
    if (attrib === undefined) {
        throw new Error('a given BufferGeometry object must have a position attribute.');
    }
    var positions = attrib.array;
    var vertices = [];
    for (var i = 0, n = positions.length; i < n; i += 3) {
        var x = positions[i];
        var y = positions[i + 1];
        var z = positions[i + 2];
        vertices.push(new THREE.Vector3(x, y, z));
    }
    var faces = [];
    for (var i = 0, n = vertices.length; i < n; i += 3) {
        faces.push(new THREE.Face3(i, i + 1, i + 2));
    }
    var geometry = new THREE.Geometry();
    geometry.vertices = vertices;
    geometry.faces = faces;
    geometry.computeFaceNormals();
    geometry.mergeVertices();
    geometry.computeVertexNormals();
    ///////////////////////////////////////////////////////////////

    var mesh = new THREE.Mesh(geometry, material);
    scene.add(mesh);
});
Then you can convert it back to a BufferGeometry, because it's more GPU/CPU-efficient for complex models:
var geometry = new THREE.Geometry();
geometry.vertices = vertices;
geometry.faces = faces;
geometry.computeFaceNormals();
geometry.mergeVertices();
geometry.computeVertexNormals();
var buffer_g = new THREE.BufferGeometry();
buffer_g.fromGeometry(geometry);
var mesh = new THREE.Mesh(buffer_g, material);
scene.add( mesh );
This issue happened to me while loading an OBJ file. If you have 3D software like 3ds Max:
1. Open the OBJ file.
2. Go to polygon selection mode and select all polygons.
3. Under the Surface Properties panel, click the 'Auto Smooth' button.
4. Export the model back to OBJ format.
Now you won't have to call geometry.mergeVertices() and geometry.computeVertexNormals(). Just load the OBJ and add it to the scene; the mesh will be smooth.
EDIT:
My OBJ files had MeshPhongMaterial by default, and setting the material's shading property to the value 2 made the mesh smooth:
child.material.shading = 2;
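For readability, the same value can be written with the named constant, and the material needs to be flagged when changed at runtime (a sketch for the three.js versions that still expose shading; newer releases use the flatShading boolean instead):
child.material.shading = THREE.SmoothShading; // === 2
child.material.needsUpdate = true; // recompile the material after changing shading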
STL does not support vertex indexing, which is why each triangle stores its own copies of its vertices.
Each vertex carries its triangle's face normal, so at the same position (multiple nearly coincident vertices) there are multiple normal values.
This leads to a non-smooth surface when the normals are used for the lighting calculation.