HTML5 Canvas: Calculate the Mouse Position after Zooming and Translating

I'm trying to develop an interactive viewer for vector drawings and want to support zooming.
The zoom function works well, but now I have the problem of calculating the mouse position for picking objects.
The event only gives back screen coordinates, and the canvas doesn't offer a method to apply the transformation matrix in the inverse direction.
Does anyone have a solution to this problem?

I made a very small and simple class for keeping track of the transformation matrix.
I added an invert() function for cases like this. I also wrote an invertPoint() function but didn't put it in the final version. It's not hard to deduce, though: just combine invert and transform-point.
I often just calculate the appropriate transform with this class and then call setTransform, depending on the application.
I wish I could give you a more specific solution, but without a code sample of what you want, that's hard to do.
Here's the transformation class code. And here's a blog post with a bit of an explanation.

Here are some valuable functions for your library that preserve the matrix state and are needed to build up a scene graph:
Transform.prototype.reset = function() {
    this.m = [1, 0, 0, 1, 0, 0];
    this.stack = [];
};
Transform.prototype.push = function() {
    this.stack.push(this.m.slice());
};
Transform.prototype.pop = function() {
    this.m = this.stack.pop();
};
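For the mouse-picking problem in the question, the screen-to-world conversion can be sketched by inverting that same matrix. The class below is a minimal stand-in for the Transform class described above; the matrix layout matches canvas setTransform (m = [a, b, c, d, e, f]), but the method names here are assumptions, not the original library's API:

```javascript
// Minimal stand-in for a canvas transform tracker.
// m = [a, b, c, d, e, f], the same layout as ctx.setTransform(a, b, c, d, e, f).
function Transform() {
  this.m = [1, 0, 0, 1, 0, 0]; // identity
  this.stack = [];
}

Transform.prototype.scale = function(sx, sy) {
  this.m[0] *= sx; this.m[1] *= sx;
  this.m[2] *= sy; this.m[3] *= sy;
};

Transform.prototype.translate = function(x, y) {
  this.m[4] += this.m[0] * x + this.m[2] * y;
  this.m[5] += this.m[1] * x + this.m[3] * y;
};

// World -> screen: what the canvas does when drawing.
Transform.prototype.transformPoint = function(px, py) {
  return [
    px * this.m[0] + py * this.m[2] + this.m[4],
    px * this.m[1] + py * this.m[3] + this.m[5]
  ];
};

// Screen -> world, e.g. for mouse picking: undo the translation,
// then apply the inverse of the 2x2 linear part.
Transform.prototype.invertPoint = function(px, py) {
  var det = this.m[0] * this.m[3] - this.m[1] * this.m[2];
  var x0 = px - this.m[4];
  var y0 = py - this.m[5];
  return [
    (this.m[3] * x0 - this.m[2] * y0) / det,
    (this.m[0] * y0 - this.m[1] * x0) / det
  ];
};
```

Feeding the mouse event's canvas-relative coordinates (e.g. offsetX/offsetY) into invertPoint yields world coordinates you can hit-test against your vector objects.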


Load 2D & 3D Forge viewers in a single web page

I would like to link elements between the 2D sheet and the 3D model, so that when I select an element in the 2D sheet it is also selected (isolated) in the 3D model, and if I change its color the same happens in both, and the other way around.
So I can use the Document Browser extension to open the 2D sheet in the first viewer and the 3D model in the second viewer:
const viewer1 = new Autodesk.Viewing.Private.GuiViewer3D(document.getElementById('MyViewerDiv1'));
const viewer2 = new Autodesk.Viewing.Private.GuiViewer3D(document.getElementById('MyViewerDiv2'));
Autodesk.Viewing.Initializer(options1, function() {
    viewer1.start();
    viewer1.load(...);
});
Autodesk.Viewing.Initializer(options2, function() {
    viewer2.start();
    viewer2.load(...);
});
If the example above is correct, I am still missing how to link both viewers.
I hope someone can help me with this issue.
Note that we have a viewer extension that might already give you what you're looking for: https://github.com/Autodesk-Forge/forge-extensions/blob/master/public/extensions/NestedViewerExtension/README.md.
If you want to implement the cross-selection between two viewer instances yourself, you can. Just subscribe to the SELECTION_CHANGED event in one of the viewers, get the selected IDs, and select the same IDs in the other viewer using the usual viewer.select([...]); method.
Btw, regarding your code snippet:
Autodesk.Viewing.Initializer only needs to be called once per page
the Autodesk.Viewing.Private.GuiViewer3D instances should be created after the initializer has done its work
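The subscribe-and-mirror approach could be sketched like this. linkSelection is a hypothetical helper (not a Forge API); the guard flag stops the two viewers from re-triggering each other endlessly when select() fires another selection event:

```javascript
// Hypothetical helper: mirror selection between two viewer instances.
// Each viewer is expected to expose addEventListener, getSelection and select
// (the standard Viewer3D surface).
function linkSelection(viewerA, viewerB, eventName) {
  var syncing = false; // guard so one viewer's select() doesn't bounce back
  function mirror(src, dst) {
    src.addEventListener(eventName, function() {
      if (syncing) return;
      syncing = true;
      dst.select(src.getSelection()); // select the same dbIds in the other viewer
      syncing = false;
    });
  }
  mirror(viewerA, viewerB);
  mirror(viewerB, viewerA);
}
```

With real viewers you would pass the selection event constant, e.g. linkSelection(viewer1, viewer2, Autodesk.Viewing.SELECTION_CHANGED_EVENT).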

Autodesk Forge Viewer getting fragment position

I'm trying to get the position of separate meshes in a model (translated from a Revit file).
What I'm doing is getting the fragmentProxy, then using getOriginalWorldMatrix() to get the THREE.Matrix4. Then, from the Matrix4, I call getPosition() to get the fragment's THREE.Vector3 world position.
However, every mesh returns the same position value. Is that because of how the model was originally built, or do I have to get the fragment position using a different method?
Your process of retrieving the fragment transform is correct. Alternatively, you could use something like this:
function getFragmentTransform(model, fragid) {
    const frags = model.getFragmentList();
    let xform = new THREE.Matrix4();
    frags.getOriginalWorldMatrix(fragid, xform);
    return xform;
}
I'm afraid you are correct that, in some cases, the transform may be baked directly into the mesh vertices.
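One quick way to confirm whether the transforms are baked in is to compare the translation components of a few fragments. THREE.Matrix4 stores its elements column-major with the translation in elements 12-14, so a tiny helper (hypothetical, no three.js needed) can read it directly:

```javascript
// Read the translation out of a column-major 4x4 elements array, the layout
// used by THREE.Matrix4.elements. If the transform is baked into the vertices,
// this translation will be identical (often zero) for every fragment.
function getTranslation(elements) {
  return { x: elements[12], y: elements[13], z: elements[14] };
}
```

With a real fragment you would call getFragmentTransform(model, fragid) from the snippet above and pass xform.elements.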

THREE.js - morphTargetInfluences on an imported JSON mesh not getting results

I have a basic three.js scene in which I am attempting to get objects exported from Blender (as JSON files with embedded morphs) to function and update their shapes with user input.
Here is a test scene
http://onthez.com/temphosting/three-js-morph-test/morph-test.html
The slab is being resized without morphs by simply scaling a box, which is working just fine.
I must be missing something fundamental with the little monument on top. It has 3 morphs (width, depth, height) that are intended to allow it to resize.
I am using this code to implement the morph based on the user's dat.GUI input.
folder1.add( params, 'width', 12, 100 ).step(1).name("Width").onChange( function () {
    updateFoundation();
    building.morphTargetInfluences['width'] = params.width/100;
    roofL.morphTargetInfluences['width'] = params.width/100;
    roofR.morphTargetInfluences['width'] = params.width/100;
    building.updateMorphs();
});
The materials for building, roofL, and roofR each have morphTargets set as true.
I've been going over the three.js examples here:
http://threejs.org/examples/?q=morph#webgl_morphtargets_human
as well as #webgl_morphtargets and #webgl_morphtargets_horse
Any thoughts or input would be much appreciated!
I believe I've reached a solution to my question. I was under the impression that the JSON loader preserved the morph target names so they could be used in place of an index number with morphTargetInfluences,
something like morphTargetInfluences['myMorphTargetName'],
but after closer inspection in the console it seems they should be referred to by number, like morphTargetInfluences[0].
Not the most intuitive, but I can work with it.
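For readers hitting the same confusion: meshes whose geometry was loaded with named morph targets typically also carry a morphTargetDictionary mapping names to indices, which restores the readable lookup. A small hypothetical helper:

```javascript
// Hypothetical helper: set a morph influence by name via the mesh's
// morphTargetDictionary (a name -> index map into morphTargetInfluences).
function setMorphByName(mesh, name, value) {
  var index = mesh.morphTargetDictionary[name];
  if (index !== undefined) {
    mesh.morphTargetInfluences[index] = value;
  }
}
```

Usage would then look like setMorphByName(building, 'width', params.width / 100).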

Unable to display anything through Canvas

I am trying to build a highlighting library with JavaScript and jQuery. I just dove into canvas techniques this week and didn't find them all that difficult. However, while working today my code simply stopped working. I know I'm probably missing something obvious, but I've been stuck for almost two hours and need to get this project moving forward again. Any help would be greatly appreciated.
$(function() {
    $('area').click(function(event) {
        event.preventDefault();
        document.getElementById("ctx").getContext("2d").fillStyle = "#FF0000";
        document.getElementById("ctx").getContext("2d").fillRect(0, 0, 200, 200);
    });
});
I have included my Javascript only since that is the only thing I have been changing recently.
Your code works for me, assuming:
The page has a clickable area.
The page has a canvas with an id of ctx.
Make sure those two things are true about your setup.
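For reference, a minimal page satisfying both assumptions might look like this (the image source, map name, and coords are hypothetical placeholders):

```html
<img src="map.png" usemap="#regions" width="200" height="200">
<map name="regions">
  <area shape="rect" coords="0,0,200,200" href="#" alt="region">
</map>
<canvas id="ctx" width="200" height="200"></canvas>
```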
Does your canvas element really have an id of ctx? That's not fatal, but ctx conventionally names the context rather than the canvas element, so it's a bit misleading.
If you have a canvas element like this:
<canvas id="canvas"></canvas>
Then you can get a reusable reference to the canvas's context like this:
// no need to constantly get a context reference ...
// just do it once at the start of your app
var canvas = document.getElementById('canvas');
var context = canvas.getContext('2d');
And you can reuse that context reference to do all your drawing calls:
context.fillStyle = 'red';
context.fillRect(0, 0, 200, 200);

AS3: How to supplement function calls in an existing library?

I have a semi-newbie question. I've been programming for years, but all my early experience was pre-OOP and my brain kind of settled that way. I'm also new to ActionScript. So hopefully this is an easy one for somebody.
I'm using as3svgrendererlib to import SVG. It works great, but I need to be able to serialize the graphics it outputs. But I can't serialize sprites, so I have to go all the way down to the IGraphicsData level to get something that I can. But the library doesn't give me that data. It only gives me sprites. So I need to change that.
Since there are only a handful of drawing methods that it ultimately uses (beginFill, drawRect, etc), my thinking is that if I can hook into those and supplement them with my own code to output IGraphicsData as well, then I'll be in business. Now I know I could do that by using "extends" classes, but that would require substantial modification of the library to change all of those standard calls to my custom ones.
So I'm wondering: Is there a magic OOP way to write methods that will universally intercept calls to existing methods without needing to modify the original calls?
Thanks! :)
EDIT: I need resolution-independence, so it's important that I keep the graphics in vector and not convert them to bitmap.
You cannot do this kind of thing in OOP; you either need to override the class (though that might not be possible in your case) or modify the library directly.
However, in your case, a possible solution would be to:
Draw the SVG to a sprite using the library.
Draw the sprite to a BitmapData.
Finally, get the pixel data using getPixels() and serialize it.
Something like this should work:
var sprite:Sprite = new Sprite();
// Add the child to the stage...
// Draw the SVG to the sprite...
var bmpData:BitmapData = new BitmapData(spriteWidth, spriteHeight);
bmpData.draw(sprite);
var pixelData:ByteArray = bmpData.getPixels(new Rectangle(0, 0, bmpData.width, bmpData.height));
// Serialize the byte array here
In this example, note that spriteWidth/spriteHeight are not necessarily sprite.width and sprite.height (sprites often report dimensions different from what you'd expect). So you need to decide the size of the rendered SVG in advance and use that when building the BitmapData.