My company is using the latest version, which supports loading multiple models for federation. The problem we are facing is that sometimes the models don't quite line up correctly. I'm aware of the globalOffset load option, but even with that in place they don't line up.
I'm therefore looking for a way to move a model after it has been loaded, so that I can store the new offset in the database and have it load correctly next time.
Is this possible at the moment?
If your models weren't set up with a common origin or shared coordinates beforehand, they won't be aligned by the globalOffset option alone.
And yes, a model can be moved after it has been loaded. You can check out this awesome extension, Viewing.Extension.Transform, written by our cool colleague Philippe; the translation tool is here.
Here is a sample showing how to move the whole model -100 units in the x-direction. The key concept is applying your model offset to each Forge fragment, as in the code snippet below.
// Assumes the target model is the currently loaded viewer.model
const model = viewer.model;
const fragCount = model.getFragmentList().fragments.fragId2dbId.length;

// Move the whole model -100 units in the x-direction
const offset = new THREE.Vector3( -100, 0, 0 );

for( let fragId = 0; fragId < fragCount; ++fragId ) {
    const fragProxy = viewer.impl.getFragmentProxy( model, fragId );
    fragProxy.getAnimTransform();

    fragProxy.position = new THREE.Vector3(
        fragProxy.position.x + offset.x,
        fragProxy.position.y + offset.y,
        fragProxy.position.z + offset.z
    );

    fragProxy.updateAnimTransform();
}

// Refresh the scene so the new fragment transforms are rendered
viewer.impl.sceneUpdated( true );
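If you want the alignment restored on the next load, one option is to persist that offset and re-apply it once the geometry is ready. Below is a minimal sketch only, not a built-in Forge API; applySavedOffset and fetchOffsetFromDb are hypothetical helper names, and the stored value is whatever your database returns.

// Sketch: re-apply a previously saved offset after the geometry has loaded.
// applySavedOffset / fetchOffsetFromDb are hypothetical helpers, not Forge APIs.
function applySavedOffset( viewer, model, saved ) {
    const offset = new THREE.Vector3( saved.x, saved.y, saved.z );
    const fragCount = model.getFragmentList().fragments.fragId2dbId.length;

    for( let fragId = 0; fragId < fragCount; ++fragId ) {
        const fragProxy = viewer.impl.getFragmentProxy( model, fragId );
        fragProxy.getAnimTransform();
        fragProxy.position.add( offset );
        fragProxy.updateAnimTransform();
    }

    viewer.impl.sceneUpdated( true );
}

viewer.addEventListener( Autodesk.Viewing.GEOMETRY_LOADED_EVENT, ( ev ) => {
    applySavedOffset( viewer, ev.model, fetchOffsetFromDb( ev.model ) );
});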
I am testing the possibilities of the Forge V7 viewer in a web browser with a Revit model.
The idea of the test is:
display 2 viewers simultaneously, with one 3D view and one 2D view
click on a point in the 2D view and
display a sphere in the 2D view at this point
display another sphere in the 3D view at the equivalent point
the spheres are added with SceneBuilder (to allow them to be selected later)
I tried to follow the behavior of
https://forge.autodesk.com/ja/node/1765 and
https://github.com/Autodesk-Forge/viewer-navigation.sample
and I have some questions:
1 - The 2D view must be a sheet; using a 2D view directly (a floor plan view, for example) does not allow the calculation. Is that correct?
2 - https://forge.autodesk.com/ja/node/1765 seems to have calculation and/or precision issues (cf. forge1.png, generated from the test on Heroku, and forge2.png, generated by my program)
3 - The method only works if you click on an object.
How can I retrieve the coordinates if I click in an empty area?
4 - The spheres are identified in the viewer (cf. forge3.png), but:
a - how do I give them a name (and not "object")?
b - how do I replace the name "model" with the name of the scene?
Can you help me?
Thanks in advance
Luc
Here is my code
// click listener on 2d viewer
function listenerScene(ev){
    var intersection = viewer2d.hitTest(ev.offsetX, ev.offsetY);
    if (intersection) {
        AddItem(intersection.intersectPoint.x, intersection.intersectPoint.y,
                intersection.intersectPoint.z);
        AddItem3d(intersection);
    }
}
// add in 2d
async function AddItem(x, y, z){
    ... add a sphere at (x,y,z)
}

// add in 3d
async function AddItem3d(intersection) {
    const worldPos = sheetToWorld(intersection.intersectPoint,
                                  viewer2d.model, viewer.model);
    if (worldPos) {
        ... add a sphere at worldPos
    }
}

// compute 3d point
function sheetToWorld(sheetPos, model2d, model3d) {
    const viewportExt = viewer2d.getExtension('Autodesk.AEC.ViewportsExtension');
    const viewport = viewportExt.findViewportAtPoint(model2d, new THREE.Vector2(sheetPos.x, sheetPos.y));
    if (!viewport) {
        return null;
    }
    const sheetUnitScale = model2d.getUnitScale();
    const globalOffset = model3d.getData().globalOffset;
    const matrix = viewport.get2DTo3DMatrix(sheetUnitScale);
    var worldPos = sheetPos.clone().applyMatrix4(matrix);
    worldPos = worldPos.sub(globalOffset);
    return worldPos;
}
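For reference, the click handler is registered roughly like this (a sketch only; it assumes viewer2d is already created and simply attaches the listener to the 2D viewer's canvas):

// Sketch: register the 2D click handler (assumes viewer2d is already initialized)
viewer2d.canvas.addEventListener('click', listenerScene);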
I'm looking for a way to make a regular speech bubble in my website's FabricJS canvas. Before you flag this post: I did see this question, but it has no proper answers and is designed for WordPress, so it's not of much use to me.
What I want is pretty clear: a speech bubble with text in it and a tail/handle that you can drag to point it at something.
I've found this library, but I can't seem to get it to show up in my FabricJS canvas. If you could either explain how to add this library to my canvas or suggest another way of making speech bubbles, that would be sublime.
I dug a bit into Fabric.js and managed to create a procedural speech bubble, but I wasn't able to quickly convert it into a Fabric.js class (which would make sense if you want to have multiple speech bubbles on your canvas). Maybe it's still helpful for you or someone else: https://codepen.io/timohausmann/pen/poywXzg
It basically creates a Textbox and, based on the bounding box of the text, updates the position and size of the Rect around it.
var bound = textbox.getBoundingRect();
rect.left = bound.left - boxPadding;
rect.top = bound.top - boxPadding;
rect.width = bound.width + (boxPadding*2);
rect.height = bound.height + (boxPadding*2);
For the tail I simply created a transparent Rect that you can drag around, and used its coordinates to draw a polygon with three points between the "handle" and the textbox center [A]. To make sure the tail maintains a certain width no matter the position, I calculate the angle between the handle and the speech bubble center [B]. To keep the positions of the textbox and the handle in sync, I calculate how much the textbox moved and simply add the difference to the handle's position [C].
//calculate the angle between textbox and handle [B]
var angleRadians = Math.atan2(handle.top - textbox.top,
handle.left - textbox.left);
var offsetX = Math.cos(angleRadians + (Math.PI/2));
var offsetY = Math.sin(angleRadians + (Math.PI/2));
//update the polygon [A]
poly.points[0].x = handle.left;
poly.points[0].y = handle.top;
poly.points[1].x = textbox.left - (offsetX * arrowWidth);
poly.points[1].y = textbox.top - (offsetY * arrowWidth);
poly.points[2].x = textbox.left + (offsetX * arrowWidth);
poly.points[2].y = textbox.top + (offsetY * arrowWidth);
//update the handle when the textbox moved [C]
if(textbox.left !== textbox.lastLeft ||
textbox.top !== textbox.lastTop) {
handle.left += (textbox.left - textbox.lastLeft);
handle.top += (textbox.top - textbox.lastTop);
handle.setCoords();
}
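To keep everything in sync while dragging, this update logic can be run from the canvas' object:moving event. A minimal sketch, assuming the update code lives in an updateBubble() function as in the CodePen:

// Sketch: re-run the bubble layout whenever the textbox or the handle moves
canvas.on('object:moving', function(e) {
    if (e.target === textbox || e.target === handle) {
        updateBubble();   // the update logic shown above
        canvas.renderAll();
    }
});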
Disclaimer: I'm not a Fabric.js expert; maybe there are a few shortcuts possible with the library.
The answer by @Til Hausmann works nicely (thanks!).
I ran into some problems when I tried to store and load the canvas data (via canvas.toJSON and canvas.loadFromJSON, respectively), though.
After some fiddling around, this could be resolved by:
1. Storing lastLeft and lastTop for both polygons in the updateBubble() method:
poly.lastLeft = Math.min(handle.left, textBox.left);
poly.lastTop = Math.min(handle.top, textBox.top);
2. Setting the left / top properties for the polygons after the data are loaded:
canvas.loadFromJSON(jsonData, () => {
const poly = // ...
const poly2 = // ...
poly.left = poly.lastLeft;
poly.top = poly.lastTop;
poly2.left = poly2.lastLeft;
poly2.top = poly2.lastTop;
// ...
// Important:
canvas.renderAll();
});
3. Passing the full set of shape properties to canvas.toJSON():
canvas.toJSON(
    ['lastLeft', 'lastTop'].concat(
        Object.keys(handleProperties),
        Object.keys(polyProperties),
        Object.keys(poly2Properties),
        Object.keys(textRectProperties)
    ))
I was surprised that step (3) is actually necessary, but it didn't work without it...
Recently, I have been using the "hole filling" plugin of OpenFlipper, and have compiled OpenFlipper entirely. However, the new mesh has a large number of duplicate vertices when I add the filling patch to the original mesh. I used the following code to perform the adding operation:
// filling_patch: newly created filling mesh
// mesh_ori: the original mesh before hole filling
class MeshT::FaceHandle fh;
class MeshT::FaceIter f_it, f_end;
class MeshT::FaceVertexIter fv_it;
for(f_it = filling_patch->faces_begin(), f_end = filling_patch->faces_end(); f_it != f_end; ++f_it)
{
    // ith face
    fh = *f_it;

    // Check whether it is valid
    if(!fh.is_valid())
    {
        return;
    }

    // Store its three vertices
    std::vector<class MeshT::VertexHandle> face_vhandles;
    face_vhandles.clear();

    // Iterate over each vertex of this face (fh is a handle into filling_patch)
    for(fv_it = filling_patch->fv_iter(fh); fv_it.is_valid(); ++fv_it)
    {
        // Get the 3D point
        class MeshT::Point p = filling_patch->point(*fv_it);

        // Add this point to the original mesh. Note: vh is a new VertexHandle, different from *fv_it
        class MeshT::VertexHandle vh = mesh_ori->add_vertex(p);
        face_vhandles.push_back(vh);
    }

    // Add the face to the original mesh
    mesh_ori->add_face(face_vhandles);
}
So, I am not sure whether there is an existing function in OpenMesh that can be used to fix this problem.
Could someone give me some advice?
Thanks a lot.
In the following example, there is a function called generateTexture().
Is it possible to draw text (numbers) into the pixel array? Or is it possible to draw text (numbers) on top of that shader?
Our goal is to draw a circle with a number inside of it.
https://forge.autodesk.com/blog/using-dynamic-texture-inside-custom-shaders
UPDATE:
We noticed that each circle can't use a unique generateTexture(). The generateTexture() result is used by every single one of them. The only thing that can be customized per object is the color, plus what texture is used.
We could create a workaround for this: generate every texture from 0 to 99, and then have each object choose the correct texture based on the number we want to display. We don't know if this will be efficient enough to work properly, though. Otherwise, it might have to be 0 to 9+ or something in that direction. Any guidance on our updated question would be really appreciated. Thanks.
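A rough sketch of that caching idea (makeNumberTexture is a hypothetical helper, e.g. a canvas-based generator along the lines of the answer below; it is not an existing API):

// Sketch of the 0-99 workaround: generate each number texture once and reuse it per object
const textureCache = new Map();

function getNumberTexture(n) {
    if (!textureCache.has(n)) {
        textureCache.set(n, makeNumberTexture(n)); // hypothetical canvas-based generator
    }
    return textureCache.get(n);
}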
I am able to successfully display text with the following code. Simply replace generateTexture() with generateCanvasTexture() in the sample and you should get the result below:
const generateCanvasTexture = () => {
    const canvas = document.createElement("canvas")
    const ctx = canvas.getContext('2d')
    ctx.font = '20pt Arial'
    ctx.textAlign = 'center'
    ctx.textBaseline = 'middle'
    ctx.fillText(new Date().toLocaleString(),
        canvas.width / 2, canvas.height / 2)
    const canvasTexture = new THREE.Texture(canvas)
    canvasTexture.needsUpdate = true
    canvasTexture.flipX = false
    canvasTexture.flipY = false
    return canvasTexture
}
It is possible, but you would need to implement it yourself. Shaders are a pretty low-level feature, so there is no way to directly draw a number or text, but you can convert a given character into its representation as a 2D pixel array.
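For illustration (a sketch only, not part of any library; the helper name and size are arbitrary), a single character can be rasterized into an RGBA pixel array with an offscreen canvas, and that array could then feed a THREE.DataTexture or be merged into generateTexture()'s buffer:

// Sketch: rasterize one character into an RGBA pixel array via an offscreen canvas
function charToPixels(char, size) {
    const canvas = document.createElement('canvas');
    canvas.width = canvas.height = size;
    const ctx = canvas.getContext('2d');
    ctx.font = (size * 0.8) + 'px Arial';
    ctx.textAlign = 'center';
    ctx.textBaseline = 'middle';
    ctx.fillStyle = '#fff';
    ctx.fillText(char, size / 2, size / 2);
    // 4 bytes per pixel (RGBA), row-major
    return ctx.getImageData(0, 0, size, size).data;
}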
I wish to use Cluster Force Layout as described by Mike here: https://bl.ocks.org/mbostock/7882658
The example works fine for me; however, when I change the data source to a JSON file, which uses a different cluster name, things stop working. There are no errors, but nothing displays.
The goal is to group the names in each division into a cluster.
The JSON file is in the HTML... not sure if you can upload data files for JSFiddle.
Any direction here is much appreciated.
Fiddle: https://jsfiddle.net/xbme6ekf/
This is where I try to recreate the nodes. The nodes appear in console.log, but never make it to the screen.
var nodes = d3.json("/r.json", function(error, data) {
for (var i = 0; i < data.length; i++) {
var obj = data[i];
for (var key in obj){
var rating = obj['rating']; // rating
var r = rating * 20; // radius
var n = obj['name']; // name
var div = obj['division']; // division
// d = {cluster: div, radius: r, name: n, division: div, rating: rating};
d = {cluster: div, radius: r};
// console.log(key+"="+obj[key]);
}
if (!clusters[i] || (r > clusters[i].radius)) clusters[i] = d;
// console.log(d);
}
return d;
});
Thanks
Kevin
First, you can use Plunker to add a JSON file if you want to play with an extra data file.
Second, when I copied your code from the fiddle to Plunker with the JSON file, console.log(nodes) didn't print out the data. That's because this code retrieving the data:
var nodes = d3.json("/r.json", function(error, data) {...})
is not exactly the same as in the example: this is an asynchronous request, so this line of code won't work:
var force = d3.layout.force()
.nodes(nodes) // data for nodes is not retrieved yet
Third, after I put the d3 code into the request callback, there were circles in the SVG but they were not visible. I think that's because the SVG size settings (width, height) don't quite fit the cx, cy of the circles, so I changed the SVG to a smaller size and the circles became visible. It depends on what you want to achieve in the end, but adjusting the position parameters for the circles can help.
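As a rough sketch of that structure (d3 v3, as in the bl.ocks example; the field names follow your r.json, and width/height are assumed to be defined as in the fiddle), the layout is only started inside the callback, once the data is available:

// Sketch: build the nodes inside the d3.json callback, then start the force layout
d3.json("/r.json", function(error, data) {
    if (error) throw error;

    var nodes = data.map(function(obj) {
        return { cluster: obj.division, radius: obj.rating * 20 };
    });

    var force = d3.layout.force()
        .nodes(nodes)
        .size([width, height]) // assumed to be defined as in the fiddle
        .start();

    // ... append the circles and wire up the tick handler here, as in the bl.ocks example
});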
Working Plunker here. Hope this helps.