In our application, we draw rooms by reading information from an IFC file and then generating custom objects that are added to the model builder. For each vertex, we subtract the globalOffset so that the rooms align nicely with the model. This works perfectly for most of our models. However, for one model the globalOffset is huge, so the custom objects are drawn far away from the model.
The vertices we read from the IFC file are located in a reasonable space around {0, 0, 0}.
My question now is: How is the globalOffset calculated? What properties of the IFC file are taken into account?
As already stated, the other models work fine when we subtract the globalOffset from each vertex. Here is an example:
Thanks in advance for any form of help!
EDIT:
For everyone interested in the origin of the global offset in the IFC file: search for "ifcsite"; there should be a reference to a local placement, and this may contain a rather large translation (at least it did in my case).
By default, the global offset is the mid-point of the model's bounding box, like this:
var bboxMeta = model.getData().metadata["world bounding box"];
var min = new THREE.Vector3(bboxMeta.minXYZ[0], bboxMeta.minXYZ[1], bboxMeta.minXYZ[2]);
var max = new THREE.Vector3(bboxMeta.maxXYZ[0], bboxMeta.maxXYZ[1], bboxMeta.maxXYZ[2]);
var bbox = new THREE.Box3(min, max);
var globalOffset = bbox.center();
It's used to avoid floating point precision issues for models that are far away from the viewer's origin. By default, Forge Viewer will use this offset to move the whole model to the viewer's origin.
To get the global offset you can also use the following line of code with the same output:
let globalOffset = viewer.model.getData().globalOffset;
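If you are adding custom geometry yourself, the same offset needs to be applied to your own points so they line up with the offset model. A minimal sketch, assuming a hypothetical roomVertices array of {x, y, z} points read from the IFC file:
// Minimal sketch: subtract the model's globalOffset from each custom point so it
// lines up with the model that the viewer has already moved towards the origin.
// roomVertices is a hypothetical array of {x, y, z} points read from the IFC file.
const offset = viewer.model.getData().globalOffset;
const shifted = roomVertices.map(function (v) {
  return new THREE.Vector3(v.x - offset.x, v.y - offset.y, v.z - offset.z);
});
// 'shifted' can now be passed to the model builder / overlay scene.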
This is related to my previous question. I'm posting a new question to try and explain the situation better.
I am placing marker objects on a model using data taken from drone surveys. I have access to high accuracy GPS data and also omega/phi/kappa rotation data.
I am trying to use the Autodesk.Geolocation extension to convert the lon/lat/alt data to viewer space.
All models were originally created in Revit.
When I use the Geolocation extension, it seems like the refPointLMV and globalOffset are not being taken into account correctly.
Here's an example:
As you can see, the selected point [0] on the model is nowhere near the real GPS coords. Also, the refPointLMV has huge values.
Something similar happens when I take some lon/lat/alt data from a drone photo. The drone GPS data is close to the model's positionLL84, e.g. (4.586577106, 51.626037158, 49.095). However, when I do Geolocation.lonLatToLMV(4.586577106, 51.626037158, 49.095) I get a result that is far off-screen.
We've had a support query open with Autodesk about this for over two months now, but haven't had much success there. They said the engineering team is too busy to work on it and recommended we try to fix the error on our side. Support ref LMV-5261.
I have been able to bring the result of Geolocation.lonLatToLMV into viewer space with the following code:
const gpsPosition = new THREE.Vector3(
  longitude,
  latitude,
  altitude,
);
const position = viewer
  .getExtension('Autodesk.Geolocation')
  .lonLatToLmv(gpsPosition);
const data = viewer.model.getData();
const globalOffset = data.globalOffset;
const refPointTransform = data.refPointTransform;
// applying the transform
position.add(globalOffset);
position.applyMatrix4(refPointTransform);
// HACK: after applying the above transforms, a final
// rotation of -45 degrees is required to move the points into position
// once this has been done, the locations match up exactly to photos.
// Seems like a weird hack, but I've tested with over 20 drone photos, they all match up.
const quaternion = new THREE.Quaternion().setFromEuler(
  new THREE.Euler(0, 0, -Math.PI / 4),
);
position.applyQuaternion(quaternion);
The problem here is that we are testing with a single model and this is clearly not a robust solution that we can expect to work with all future models and drone data we throw at it.
How long is it likely to take for the engineering team to fix this? Or are they likely to fix this at all?
Sorry for the delay due to the Chinese New Year. After checking with our engineering team, the current solution is to do the following:
Move the Project Base Point to N0 E0, but keep the angle to true north
Copy the LAT/LONG to the Survey Point
Afterward, the result of the geo conversion should be as expected.
Here are snapshots of the above settings and the result:
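Once the model has been re-exported with those settings, a quick way to verify the conversion (a hedged sketch using the Geolocation extension's lmvToLonLat/lonLatToLmv pair; pickedPoint is a hypothetical viewer-space THREE.Vector3, e.g. taken from a selection) is to round-trip a point and check that it comes back without any extra offset or rotation:
// Hedged sketch: round-trip a picked viewer-space point through the
// Geolocation extension. With the corrected base/survey points, the result
// should land back on the original point without the -45 degree hack.
const geoExt = viewer.getExtension('Autodesk.Geolocation');
const lonLatAlt = geoExt.lmvToLonLat(pickedPoint); // viewer space -> lon/lat/alt
const roundTrip = geoExt.lonLatToLmv(lonLatAlt);   // lon/lat/alt -> viewer space
console.log('delta:', roundTrip.clone().sub(pickedPoint));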
The measurement tool of the viewer has a calibration tool. It requires the user to select two points in the viewer and define the distance between them in the proper units.
My plan is to have the points defined in my model at a fixed distance, so I will not need user input for this. How do I supply the distance, unit, and size so as to set the calibration programmatically?
Edit: The workaround.
I need the default units to be meters, and 1 meter on the model should correctly measure as 1 meter with the measurement tool.
For the time being, what I did is this:
I manually calibrated the model to meters using the calibrate tool, by picking two known points in the model.
Then I used this to get the scale factor:
var measureExtension =NOP_VIEWER.getExtension('Autodesk.Measure')
var factor = measureExtension.getCalibrationFactor()
(I used the above code lines in the developer console of the browser while interacting with the viewer simultaneously.)
which gave me this value factor = 0.039369.
I am now applying this scale factor in my code once the model is loaded again:
measureExtension.calibrateByScale('m', 0.039369)
This seems to solve the issue for the models that I have with me.
I know this will break once I get a different model with different default units. Please let me know if someone has a better solution.
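One idea for avoiding the hard-coded factor (an untested sketch: it assumes the calibration factor is simply the model-unit-to-metre conversion returned by getUnitScale(), which is worth checking against getCalibrationFactor() after a manual calibration) is to derive it from the model itself:
// Untested sketch: Model.getUnitScale() returns the factor that converts the
// model's units to meters. The assumption is that this same factor is what
// calibrateByScale('m', ...) expects; verify against getCalibrationFactor()
// on a manually calibrated model before relying on it.
var measureExtension = NOP_VIEWER.getExtension('Autodesk.Measure');
var toMeters = NOP_VIEWER.model.getUnitScale();
measureExtension.calibrateByScale('m', toMeters);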
I'm taking a quick guess by looking at the viewer3D.js source:
var measureExt = viewer.getExtension('Autodesk.Measure')
// pick from available values:
// 'decimal-ft'
// 'ft-and-fractional-in'
// 'ft-and-decimal-in'
// 'decimal-in'
// 'fractional-in'
// 'm'
// 'cm'
// 'mm'
// 'm-and-cm'
measureExt.calibrate('decimal-in', 10)
I have run the cyclone case from the OpenFOAM tutorials and want to view it using the built-in paraFoam viewer, which is based on ParaView 5.4.0.
The simulation has a number of particles in the diameter range [2e-5, 1e-4], and I would like to scale the size of the particles with the diameter array provided with the results.
To do this I select the Point Gaussian representation for the Lagrangian fields (kinematiccloud), open the advanced properties, and enable 'Scale by data array', after which the diameter array is chosen by default (although it's not possible to change it to another field, which I suspect is a bug). But all the particles disappear from the view, as can be seen in the following screenshot:
My guess is that I need to choose proper values for the Gaussian radius and the scale transfer function, but there is no documentation on what they should be set to. I have tried trial and error, but I cannot find any settings that bring the particles back and render them at different sizes.
Can someone enlighten me on how to set the Gaussian radius and scale transfer function properly?
The Point Gaussian representation has just been improved and its configuration is now automatic. You may want to try the latest release of ParaView.
More info here:
https://blog.kitware.com/major-improvements-on-the-point-gaussian-representation/
I'm using AS3, but general programming wisdom unspecific to AS3 is great too!
I am creating my first game, a top-down dungeon crawler with tile-based navigation, and I am deciding how to store my maps. I need to be able to access a specific tile at any point in time. My only thought so far is to use nested Vectors or Arrays with the first level being the row and the second being the column, something like this:
private var map:Array = new Array(Array(0,1,0,0,1,1,0),Array(0,1,0,1,0,1,0));
private var row2col3:uint = map[1][2];
/*map would display as such:*/
#|##||#
#|#|#|#
Ultimately, the idea is to build a Map class that will be easily extensible and, again, allow free access to any specific tile. I am looking for help in determining an effective/efficient design architecture for that Map class.
Thanks!
As stated in the comments, I've uploaded the source code of a 12-hour challenge project to create a tile-based level editor. The source code can be found at: GitHub BKYeates
This level editor assumes textures sized to a power of 2, and uses blitting for drawing all the textures. It can read, write, and store partial tiles. There is also some functionality to erase and draw collision boxes.
Now, in regards to how the storage should be set up, it is really up to you. If you are going to be storing lots of information, I recommend using Vectors. Vectors perform faster than most other container types, except for ByteArray (if used correctly). In my level editor I used a Vector with a particular setup.
The Vector I used is named _map and lives in a class called tilemodel, which is responsible for updating all the storage information when a change is made. The _map variable is set up like so:
_map = new Vector.<Vector.<Vector.<Object>>>();
This is a pretty heavily nested Vector which, in the end, stores, can you believe it, an Object! That admittedly eats into the performance gains you get from using Vector when you are indexing the innermost elements.
But ignore that, because the indexing gain from this setup is the key part. The reason it is set up this way is that I can reference a layer, a row, and a column to grab a specific tile object. For example, say I have a tile on layer 2, in row 12, column 13, that I want to access:
var tileObject:Object = _map[2][12][13];
That works perfectly for pretty much any scenario in my tile-based game, and the speed is better than that of an Object or Dictionary when the map is accessed many times (i.e. in a loop, which happens often).
The level editor is designed to do all its drawing with blitting and leave the onus of storage to my management classes. The speed gain from doing this is very high, and the way it is currently set up, the tilemodel can store partial bitmaps, making it slightly more flexible than the standard rigidness of a power-of-2 texture reader.
Feel free to look through the source code. But here is a summary of what some of the classes do:
tilecontroller - Issues state changes and updates to tilemanager and tilemodel
tilemanager - Responsible for texture drawing and removal.
tilemodel - Stores and updates the current map on state changes.
r_loader - Loads all assets from assetList.txt (the paths to the images are set there).
hudcontroller - Currently the last thing I was working on; it lets you draw collision boxes that are stored in a separate file alongside the map.
g_global & g_keys - Global constants and static methods used ubiquitously
LevelEditor - Main class, also designed as the "View" class (see the MVC pattern)
Also, as I've mentioned, it can read back all the storage. I did not upload the class used for that to GitHub, but I figured I would show the important method here:
// @param assets needs to be the list of loaded bitmap images
public function generateMap( assets:* ):void {
    var bmd:BitmapData = new BitmapData( g_global.stageWidth, g_global.stageHeight, true, 0 );
    _canvas = new Bitmap( bmd, "auto", true );
    _mapLayer.addChild( _canvas );
    _canvas.bitmapData.unlock();
    g_global.echo( "generating map" );

    var i:int, j:int, m:int;
    for ( m = 0; m < _tiles.length; m++ ) {
        for ( i = 0; i < _tiles[m].length; i++ ) {
            for ( j = 0; j < _tiles[m][i].length; j++ ) {
                // Wondering why I'm type casting in this evaluation? _tiles[i][j].tile == int( _tiles[i][j].tile )
                // The level editor stores tiles that are larger than the grid size at indices containing values that are a percent of the tile size.
                var tile:Object = _tiles[m][i][j];
                if ( tile != null && int( tile.tile ) == tile.tile ) {
                    addTile( g_global.GRIDSIZE * tile.column, g_global.GRIDSIZE * tile.row, { index:tile.tile, bitmap:assets[ tile.tile ] }, tile.rotation );
                }
            }
        }
    }
    _canvas.bitmapData.lock();
}
Anyway I hope this information finds you well. Good luck!
I asked a similar question a while back: https://gamedev.stackexchange.com/questions/60433/is-it-more-efficient-to-store-my-tile-grid-as-a-dictionary-or-an-array. I'm not sure that it would really matter whether it's an Array or a Vector (the differences in efficiency seem to differ between FP versions, etc.).
But, yeah, you probably want to use one or the other (not a Dictionary or anything), and you probably want to index it like [y * width + x], not [x][y]. Reasons: Efficiency and not having overly complicated data structures.
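For illustration, a minimal sketch of that flat layout (written as plain JavaScript for brevity; an AS3 Vector.<uint> version looks the same apart from the type declarations), using the map from the question:
// Flat [y * width + x] storage for the 7x2 map from the question.
var width = 7, height = 2;
var tiles = [
  0, 1, 0, 0, 1, 1, 0,
  0, 1, 0, 1, 0, 1, 0
];
function getTile(x, y) {
  return tiles[y * width + x];
}
function setTile(x, y, value) {
  tiles[y * width + x] = value;
}
getTile(2, 1); // row 2, column 3 from the question => 0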
Also if you need to be able to regularly access the Array or Vector outside of that class, just make the variable internal or public or whatever; making it private and wrapping over it with functions, while being more prim-and-proper class design, would still be overkill.
One method I am using right now for my own project is storing my tiles in a black-and-white pixel bitmap (with a wrapper written around it). I'm not sure how efficient this is overall, as I've never benchmarked it and just wrote it quickly to create a map for testing purposes, but I find it offers the advantage that I can draw my maps in an image editor and view them easily, while still allowing random pixel/tile access.
Looking at your sample code, I'm guessing you have only two types of tiles right now, so you could just use black and white pixels as well if you want to try it.
I've done the 2D array method as well (I'm actually still using it for other parts), which works fine too, but it can perhaps be harder to visualise at larger sizes. Looking forward to Bennett's answer.
I've written a little Wavefront .obj file parser (a 3D model format). I'm able to display the geometry correctly, but I'm having problems texturing it correctly.
The only way I'm able to get a correct texture is by dividing the model in my 3D editor, then exporting and parsing it that way, i.e. I'm no longer sharing vertex data; each triangle is on its own, so my index buffer's array looks like [0,1,2,3,4,5,6...], which I want to avoid.
The correct texture/inefficient geometry (No reusing of vertices: 36 vertices):
Correct http://imageshack.us/a/img29/2242/textureright.jpg
Wrong texture/right topology (Sharing data: 8 vertices only = efficient):
Wrong http://imageshack.us/a/img443/6160/texturewrong.jpg
I thought about trying to separate the UV buffer from the index buffer intended for the vertices, but I didn't find a way to do it, if it is doable at all.
I also messed with the AGAL code but haven't achieved any results.
The desired end result is being able to pass different UV coordinates to the same vertex, depending on the triangle currently being drawn.
What to do?
Thanks. (I'm new to 3d programming)
It might seem like you need just one vertex per 'vertex location' of your model but, from what I understand of an .obj parser, you need to define your vertices around the FACES. This means you may have multiple vertices for some locations - depending on how many faces adjoin that location - but the payoff is that you can have different UV coordinates for those vertices in the same location.
I'd suggest altering your parser to create vertices based on the faces they define rather than solely their positions. I know this bumps up the number of vertices but, from what I've read, it's unavoidable if you need different UVs for the same vertex location.
So, unfortunately, I'm pretty sure your first option is the way to go.
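To illustrate the idea, here is a rough sketch of that parsing step (plain JavaScript with hypothetical buffer names, not Stage3D/AGAL-specific): one output vertex is created per face corner, keyed on the position/UV index pair, so corners that share a position but not a UV get duplicated while fully identical corners are still reused:
// Rough sketch with hypothetical names.
var objPositions = []; // filled from the "v" lines (x, y, z, x, y, z, ...)
var objUVs = [];       // filled from the "vt" lines (u, v, u, v, ...)
var outPositions = [], outUVs = [], outIndices = [];
var seen = {};         // maps "posIndex/uvIndex" -> index in the output buffers
function addCorner(posIndex, uvIndex) {
  var key = posIndex + '/' + uvIndex;
  if (seen[key] === undefined) {
    seen[key] = outPositions.length / 3;
    outPositions.push(objPositions[posIndex * 3],
                      objPositions[posIndex * 3 + 1],
                      objPositions[posIndex * 3 + 2]);
    outUVs.push(objUVs[uvIndex * 2], objUVs[uvIndex * 2 + 1]);
  }
  outIndices.push(seen[key]);
}
// Example: a face line "f 1/1 2/3 3/2" (1-based indices in .obj) becomes
// addCorner(0, 0); addCorner(1, 2); addCorner(2, 1);
// Only corners whose UVs actually differ end up duplicated, so the vertex
// count stays well below the fully unshared 36-vertex case.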
It seems like your welding operation is wrong. For welding vertices, you must be sure that the positions, UV coordinates, normals, and tangents (if you need them) are all equal.