Convert Autodesk Viewer Units to Inches - autodesk-forge

I am using the viewer with the Edit2D library and am trying to convert the length between two x and y points into real measurements.
For example, after a shape is drawn using the polygon tool, I want to get the length of the first edge.
In the event handler shown below, I get the drawn shape, take its first two points, and compute the distance between them. The values seem to be in Autodesk units or something. Is there an easy way to convert them to feet or inches?
I have found
Edit2DExtension.defaultContext.unitHandler.fromDisplayUnits()
as well as
Edit2DExtension.defaultContext.unitHandler.toDisplayUnits()
and also
Autodesk.Viewing.Private.convertUnits().
I've tried all three, but am unsure how to use them and haven't found any good results with them yet.
There may be a way to do it through Edit2D, but I haven't found one yet, and there is next to no documentation that I can find for this library.
beforeEdit2DAction(event) {
    console.log('After Shape has been drawn -> ', event);
    let shape = event.action.shape;
    let pointA = shape._loops[0][0]; // Value: {x: 21.393766403198242, y: 20.934386880096092}
    let pointB = shape._loops[0][1]; // Value: {x: 25.082155227661133, y: 20.934386880096092}

    // Distance between the 2 points (assuming Autodesk units)
    let length = Autodesk.Edit2D.Math2D.distance2D(pointA, pointB); // 3.6883888244628906

    // Need to convert to real-world units (preferably ft or inches)
}
The real length is 29.5 FEET
Any ideas or comments are welcome! Thanks
Edit: Trying Petr's suggestion, here's what it returned:

That's an interesting question. The "unit handler" keeps track of two types of units:
layer units (Edit2DExtension.defaultContext.unitHandler.config.layerUnits), which can be inches, for example
display units (Edit2DExtension.defaultContext.unitHandler.config.displayUnits)
These two properties control how the actual lengths and areas are displayed. For example, the unit handler's toDisplayUnits method is implemented like so:
toDisplayUnits(fromUnits, value) {
    this.updateConfig();
    return Autodesk.Viewing.Private.convertUnits(fromUnits, this.config.displayUnits, this.config.scaleFactor, value);
}
With that, configuring fromUnits and displayUnits (and scale) properly should give you the real measurements you need.
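For example, here is a rough sketch of how that could look in the question's handler. This is not verified against a real model: the 'ft'/'in' unit strings and the direct config assignments are assumptions, and layerUnits / scaleFactor have to match the sheet's real-world calibration for the numbers to come out right.

// Sketch only - assumes the Edit2D extension is already loaded on `viewer`
// and that pointA/pointB come from the handler above.
const edit2d = viewer.getExtension('Autodesk.Edit2D');
const unitHandler = edit2d.defaultContext.unitHandler;

// Assumed calibration; adjust to whatever the drawing really uses.
unitHandler.config.layerUnits = 'ft';
unitHandler.config.displayUnits = 'ft';

const rawLength = Autodesk.Edit2D.Math2D.distance2D(pointA, pointB);
const lengthInFeet = unitHandler.toDisplayUnits(unitHandler.config.layerUnits, rawLength);

// Or convert directly if both units and the calibration factor are known:
const lengthInInches = Autodesk.Viewing.Private.convertUnits('ft', 'in', 1, lengthInFeet);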

Mapping Nebraska school districts with D3 v4 - base layer not showing

I am having trouble mapping Nebraska school districts in D3 (v4). (See bl.ock here.) I can map Nebraska counties no problem, but the same code modified for school districts--and pointing to a school district TopoJSON file--gives me a blank page.
Here's how I created the JSON, based on Mike Bostock's excellent instructions:
curl "https://www2.census.gov/geo/tiger/GENZ2017/shp/cb_2017_31_unsd_500k.zip" -o cb_2017_31_unsd_500k.zip
unzip -o cb_2017_31_unsd_500k.zip
shp2json cb_2017_31_unsd_500k.shp -o ne_district.json
ndjson-split "d.features" < ne_district.json > ne_district.ndjson
ndjson-map "d.id = d.properties.GEOID, d" < ne_district.ndjson > ne_district-id.ndjson
geo2topo -n districts=ne_district-id.ndjson > ne_district-id-topo.json
And here's my projection:
var projection = d3.geoConicConformal()
    .parallels([40, 43])
    .rotate([100, 0])
    .scale(8000);
Thanks for your help and apologies in advance for anything important I left out!
The issue is that you haven't finished setting your projection parameters. You have rotated the map, which is how you should center a conic projection along the x axis. But you haven't centered the map on the y axis, so it is still centered on the equator.
For a conical projection, you can do this in one of three ways:
Center the map on a central latitude: projection.center([0,y])
You don't need to use .center with an x value because the map is already centered on the x axis by the rotation; rotation and centering are cumulative.
Rotate the map to a central latitude and longitude: projection.rotate([-x,-y])
On a conical projection, rotation on the meridian does not warp the map (generally); we rotate by the negative because we move the earth under us. This option does slightly distort the map relative to the other options - this may be preferable.
Use the projection translation to center the map
The easiest way is to translate the result while automatically scaling (though you can do this manually too) with projection.fitSize or projection.fitExtent. These methods modify projection.scale and projection.translate. As with centering with .center, you need to keep your rotation - otherwise you'll get an odd tilt to the map.
These methods set translate and scale to appropriate values so that your map area contains the desired features:
var featureCollection = topojson.feature(ne, ne.objects.districts);
projection.fitSize([width,height],featureCollection);
These methods must take objects, not arrays, so we use the featureCollection, not the features as an array
Both methods take an array specifying the size to stretch a provided geojson object over:
projection.fitSize([mapwidth,mapheight],geojsonObject)
projection.fitExtent([[left,top],[right,bottom]],geojsonObject)
Here's an updated gist using fitSize.
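For reference, here is a rough sketch of option 1 using .center instead of fitSize. It is untested against this data: width, height, svg, and the loaded topology ne are assumed to match the question's setup, and 41.5 is just an approximate central latitude for Nebraska.

var projection = d3.geoConicConformal()
    .parallels([40, 43])
    .rotate([100, 0])                    // central meridian ~100 degrees W
    .center([0, 41.5])                   // assumed central latitude for Nebraska
    .scale(8000)                         // may still need manual tweaking
    .translate([width / 2, height / 2]);

var path = d3.geoPath().projection(projection);

svg.selectAll('path')
    .data(topojson.feature(ne, ne.objects.districts).features)
    .enter().append('path')
    .attr('d', path);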

Converting altitude to z-level (and vice versa)

When using ol3-cesium and the map is in 3d mode, calling map.getView().getZoom() returns undefined. This might affect setZoom as well.
I understand we are in a 3D world, so there are no z-levels as in tiled maps. On the other hand, Google Maps calculates a z-equivalent when coming back from 3D to 2D.
How can I convert from height to a z-equivalent? Any formula, taking into account the latitude and altitude, to get the z equivalent?
There's no easy formula to get a 2D "Z" value from 3D, because the 3D camera can be tilted, can see different levels of tiles in the foreground vs the background, etc.
For individual tiles however, there are specific known "Level" values from the imagery quadtree. You can see these in Cesium Inspector by clicking the little + next to the word Terrain on the right side, and then put a checkmark on Show tile coordinates. The coordinates shown include L, X, and Y, where L is the tile's level (0 being the most zoomed-out, higher numbers more zoomed in), and X and Y are 2D positions within the imagery layer.
I posted an answer on GIS SE that shows how to reach in and grab these tiles, the same way Cesium Inspector does, along with a number of caveats involved. You could potentially look for the highest-level visible tile, and use that as your "Z" value.
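For what it's worth, here is a sketch of that idea. It reaches into underscore-prefixed Cesium internals, which are private and can change between releases, so treat it as illustrative only.

// Illustrative only - _surface and _tilesToRender are private Cesium internals.
function approximateZoomLevel(viewer) {
    var tiles = viewer.scene.globe._surface._tilesToRender;
    if (!Cesium.defined(tiles) || tiles.length === 0) {
        return undefined;
    }
    var maxLevel = 0;
    for (var i = 0; i < tiles.length; i++) {
        if (tiles[i].level > maxLevel) {
            maxLevel = tiles[i].level;
        }
    }
    return maxLevel; // deepest imagery level currently on screen
}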
I know this is not accurate, but sharing in case this is of use to anyone.
I have moved to several altitudes in Google Maps, switching between the 2D and 3D maps, writing down the z or altitude shown in the address bar:
z altitude (metres)
----- -----------------
3 10311040
4 5932713
5 2966357
6 1483178
7 741589
8.6 243624
11.35 36310
13.85 6410
15.26 2411
17.01 717
18.27 214
19.6 119
20.77 50
21 44
With the above correspondences, I have approximated the following function:
function altitudeToZoom(altitude) {
    var A = 40487.57;
    var B = 0.00007096758;
    var C = 91610.74;
    var D = -40467.74;
    return D + (A - D) / (1 + Math.pow(altitude / C, B));
}
Based on your formula, the reverse conversion should be:
altitude = C * Math.pow((A-D)/(zoomLevel-D) -1, 1/B);
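Putting both directions together (the constants are just the rough curve fit from the table above, so expect some error):

var A = 40487.57;
var B = 0.00007096758;
var C = 91610.74;
var D = -40467.74;

function altitudeToZoom(altitude) {
    return D + (A - D) / (1 + Math.pow(altitude / C, B));
}

function zoomToAltitude(zoomLevel) {
    return C * Math.pow((A - D) / (zoomLevel - D) - 1, 1 / B);
}

var z = altitudeToZoom(36310);   // ~11.2 (the table above says 11.35)
var alt = zoomToAltitude(z);     // ~36310 metres again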

Best and most performant implementation of dynamic shapes in cesium

I am currently working on an application that is using a Cesium Viewer. I need to be able to display a collection of shapes that will be updated dynamically. I am having trouble understanding the best way to do this.
I currently am using Entities and using CallbackProperties to allow for the updating of shapes.
You can throw this into a Sandcastle to get an idea of how I am doing this. There is a polygon object that is being used as the basis for the Cesium CallbackProperty, and it is getting edited by another piece of code (simulated with the setInterval below).
var viewer = new Cesium.Viewer('cesiumContainer', {});

var polygon = {};
polygon.coordinates = [
    {longitude: 0, latitude: 0, altitude: 0},
    {longitude: 10, latitude: 10, altitude: 0},
    {longitude: 10, latitude: 0, altitude: 0}
];

// converts generic style options to cesium ones (aka color -> material)
var polOpts = {};

// callback for getting the positions
polOpts.hierarchy = new Cesium.CallbackProperty(function() {
    var hierarchy = [];
    for (var i = 0; i < polygon.coordinates.length; i++) {
        var coordinate = polygon.coordinates[i];
        hierarchy.push(Cesium.Cartesian3.fromDegrees(coordinate.longitude, coordinate.latitude, coordinate.altitude));
    }
    return hierarchy;
}, false);

viewer.entities.add({polygon: polOpts});

setInterval(function(polygon) {
    polygon.coordinates[0].longitude--;
}.bind(this, polygon), 1000);
The polygon being passed in is a class that generically describes a polygon, so it has an array of coordinates and style options, as well as a render method that calls this renderPolygon method, passing in itself.
This method of rendering shapes works for everything I need it to, but it is not very performant. There are two cases of shapes updating: one type of shape is updated over a long period of time at a slow rate, like once every few seconds. The other type gets updated many times, like thousands of times within a few seconds, and then doesn't change again for a long time, if ever.
I had two ideas for how to fix this.
Idea 1:
Have two methods, a renderDynamicPolygon and a renderStaticPolygon.
The renderDynamicPolygon method would do the above functionality, using the cesiumCallbackProperties. This would be used for shapes that are getting updated many times during the short time they are being updated.
The renderStaticPolygon method would replace the entities properties that are using callbackProperties with constant values, once the updating is done.
This creates a lot of other work to make sure shapes are in the right state, and doesn't help the shapes that are being updated slowly over a long period of time.
Idea 2:
Similarly to how the primitives work, I tried removing the old entity and adding it again with its updated properties each time it needs to be updated, but this resulted in flickering, and unlike primitives, I could not find an async property for entities.
I also tried using primitives. It worked great for polylines: I would simply remove the old one and add a new one with the updated properties. I was also using async = false to ensure there was no flickering. The issue I ran into here was that not all shapes can be created using primitives. (Is this true?)
The other thing I tried was using geometry instances with a geometry and an appearance. After going through the tutorial on the Cesium website I was able to render a few shapes and could update the appearance, but I found it close to impossible to figure out how to update the shapes correctly, and also had a very hard time getting them to look correct. Shapes need to have the right shape, a fill color and opacity, and a stroke color, opacity, and weight. I tried to use PolygonOutlineGeometry, but had no luck.
What would be the best way to implement this? Are one of these options headed the right way or is there some other method of doing this I have not uncovered yet?
[Edit] I added an answer showing where I have gotten to, but it is still not complete and I am looking for answers.
I have come up with a pretty good solution to this, but it still has one small issue.
I made two ways of showing entities, which I am calling render and paint. Render uses Cesium.CallbackProperty with the isConstant property set to false, and paint with isConstant set to true.
Then I created a function to change an entity from render to paint and vice versa. It goes through the entity's callback properties and uses setCallback to overwrite each property with the correct function and isConstant value.
Example:
I create an ellipse based on a circle object I have defined.
// isConst is true if it is being "painted" and false if it is being "rendered"
ellipse: lenz.util.extend(this._getStyleOptions(circle), {
    semiMinorAxis: new Cesium.CallbackProperty(
        this._getRadius.bind(this, circle),
        isConst
    ),
    semiMajorAxis: new Cesium.CallbackProperty(
        this._getRadius.bind(this, circle),
        isConst
    ),
})
So when the shape is being updated (while the user is drawing a shape) the shape is rendered with the isConstant being false.
Then when the drawing is complete it is converted to the painted version using some code like this:
existingEntity.ellipse.semiMinorAxis.setCallback(
    this._getRadius.bind(this, circle),
    isConst
);
existingEntity.ellipse.semiMajorAxis.setCallback(
    this._getRadius.bind(this, circle, 1),
    isConst
);
This works great performance-wise. I am able to draw hundreds of shapes without the frame rate dropping much at all. I have attached a screenshot of the Cesium map with 612 entities before and after my changes; the frame rate is shown in the upper right, using the Chrome rendering tool.
Before: locked up at 0.9 fps.
Note: I redacted the rest of the UI, which makes the globe look cut off, sorry.
And after the changes: the fps remains at 59.9, almost perfect!
Whenever the entity is 'converted' from using constant to non-constant callback properties, it and all other entities of the same type flash off and then on again. I cannot find a better way to do this conversion. I feel as though there must still be something I am missing.
You could try using a PositionPropertyArray as the polygon's hierarchy with SampledPositionProperty for any dynamic positions and ConstantPositionProperty for any static positions. I'm not sure if it would perform any better than your solution, but it might be worth testing. Here is an example of how it might work that you can paste into the Cesium Sandcastle:
var viewer = new Cesium.Viewer('cesiumContainer', {});

// required if you want no interpolation of position between times
var noInterpolation = {
    type: 'No Interpolation',
    getRequiredDataPoints: function (degree) {
        return 2;
    },
    interpolateOrderZero: function (x, xTable, yTable, yStride, result) {
        if (!Cesium.defined(result)) {
            result = new Array(yStride);
        }
        for (var i = 0; i < yStride; i++) {
            result[i] = yTable[i];
        }
        return result;
    }
};

var start = viewer.clock.currentTime;

// set up the sampled position property
var sampledPositionProperty = new Cesium.SampledPositionProperty();
sampledPositionProperty.forwardExtrapolationType = Cesium.ExtrapolationType.HOLD;
sampledPositionProperty.addSample(start, Cesium.Cartesian3.fromDegrees(0, 0)); // initial position
sampledPositionProperty.setInterpolationOptions({
    interpolationAlgorithm: noInterpolation
});

// set up the sampled position property array
var positions = [
    sampledPositionProperty,
    new Cesium.ConstantPositionProperty(Cesium.Cartesian3.fromDegrees(10, 10)),
    new Cesium.ConstantPositionProperty(Cesium.Cartesian3.fromDegrees(10, 0))
];

// add the polygon to the Cesium viewer
var polygonEntity = new Cesium.Entity({
    polygon: {
        hierarchy: new Cesium.PositionPropertyArray(positions)
    }
});
viewer.zoomTo(viewer.entities.add(polygonEntity));

// add a sample every second
var counter = 1;
setInterval(function(positionArray) {
    var time = Cesium.JulianDate.addSeconds(start, counter, new Cesium.JulianDate());
    var position = Cesium.Cartesian3.fromDegrees(-counter, 0);
    positionArray[0].addSample(time, position);
    counter++;
}.bind(this, positions), 1000);
One nice thing about this is you can set the timeline start/end time to a reasonable range and use it to see your polygon at any time within the sample range so you can see the history of your polygons through time (See here for how to change the timeline start/end time). Additionally, you don't need to use timers to set the positions, the time is built in to the SampledPositionProperty (although you can still add samples asynchronously).
However, this also means that the position depends on the current time in the timeline instead of a real-time array value. And you might need to keep track of a time somewhere if you aren't adding all the samples at once.
I've also never done this using ellipses before, but the semiMinorAxis and semiMajorAxis are properties, so you might still be able to use a SampledProperty.
Of course, this doesn't really matter if there are still performance issues. Hopefully it will help, since you don't need to recreate the array from scratch on each callback and, depending on how you're getting the data to update the polygons, you might be able to add multiple samples at once. This is just speculation, but it's something to consider.
EDIT
Cesium can handle quite a few samples added to a sampled position; for example, in the above code, if you add a million samples to the position it takes a few seconds to load them all, but it renders the polygon at any time without any performance issues. To test this, instead of adding samples using a timer, just add them all directly to the property:
for (var i = 0; i < 1000000; i++) {
    var time = Cesium.JulianDate.addSeconds(start, i, new Cesium.JulianDate());
    var position = Cesium.Cartesian3.fromDegrees(-(i % 2), 0);
    positions[0].addSample(time, position);
}
However, if you run into memory problems, there is currently no way to remove samples from a position property without accessing private variables. A workaround would be to periodically create a new array containing new position properties and use the previous position property array's setValue() method to clear the previous values, or perhaps to use a TimeIntervalCollectionProperty as in this answer and remove time intervals with the removeInterval method.
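A rough sketch of that first workaround (the pruneSamples function and the recentSamples bookkeeping are purely illustrative; you would keep your own list of the samples still worth keeping):

// Illustrative sketch: periodically swap in a fresh SampledPositionProperty
// containing only the samples you still care about, and push it into the
// existing PositionPropertyArray via setValue().
function pruneSamples(positionPropertyArray, staticPositions, recentSamples) {
    var fresh = new Cesium.SampledPositionProperty();
    fresh.forwardExtrapolationType = Cesium.ExtrapolationType.HOLD;
    recentSamples.forEach(function(sample) {
        fresh.addSample(sample.time, sample.position);
    });
    // First entry is the dynamic vertex; the rest stay constant.
    positionPropertyArray.setValue([fresh].concat(staticPositions));
    return fresh;
}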

Cesium CZML: using lat long alt

I imagine this is a simple problem for anyone really familiar with Cesium's CZML files. I'm just trying to display a series of lat/long/alt points as a flight path using Cesium. Can someone tell me what the "position" tag should look like?
Unless I'm looking in the wrong places, I don't see a lot of examples for CZML. So it's hard to know what tags can be used and how to use them (and the JavaScript console doesn't show the errors if you get them wrong).
In the Sandcastle CZML example on the Cesium website, the relevant section looks like this:
"position" : {
"interpolationAlgorithm" : "LAGRANGE",
"interpolationDegree" : 1,
"epoch" : "2012-08-04T16:00:00Z",
// Trimmed to just 2 points
"cartesian" : [0.0, -2379754.6637012, -4665332.88013588, 3628133.68924173,
3894.996219574019, -2291336.52323822, -4682359.21232197, 3662718.52171165]
}
If it's two points, why are there 8 values? If it was ECEF coordinates, I would expect only three per point...
For example, when I tried this, I got an "uncaught error" message in the console... which isn't very helpful:
"cartographic" : [-1.472853549, 0.589580778, 1000,
-1.472962668, 0.589739552, 1000 ]
According to the documentation, cartographic takes (long, lat, height) where long and lat are in radians and height is in meters.
The first coordinate in each set of 4 is time, so it's actually (t, x, y, z). In the example you posted, t is the number of seconds after the specified epoch that the waypoint exists.
You can also use cartographicRadians or cartographicDegrees, but they would still be specified as (t, lon, lat, alt).
If you want to draw a route that is not time-dynamic (i.e. just a static line), you can use the polyline CZML object instead, which has a list of x/y/z positions without time.
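For example, a static flight path might look roughly like the packet below (the values are illustrative only; remember a CZML document also needs the usual leading packet with "id": "document" and a "version"):

{
    "id": "flightPath",
    "polyline": {
        "positions": {
            "cartographicDegrees": [
                -116.52, 35.02, 80,
                -116.50, 35.04, 4000,
                -116.48, 35.08, 9000
            ]
        },
        "width": 2,
        "material": {
            "solidColor": {
                "color": { "rgba": [255, 0, 0, 255] }
            }
        }
    }
}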
Matthew's answer is correct; it took a little bit of tweaking to get working, so for others looking at this, here is an example showing cartographicDegrees in use.
"position": {
"interpolationAlgorithm": "LAGRANGE",
"interpolationDegree": 1,
"epoch": "2012-08-04T16:00:00Z",
"cartographicDegrees": [
//time, lat, long, alt
0,-116.52,35.02,80,
300,-116.52,35.04,4000,
600,-116.52,35.08,9000,
900,-116.52,35.02,3000,
1100,-116.52,35.02,1000,
1400,-116.52,35.02,100
]
}

How does this work in computing the depth map?

From this site: http://www.catalinzima.com/?page_id=14
I've always been confused about how the depth map is calculated.
The vertex shader function calculates position as follows:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord; // pass the texture coordinates further
    output.Normal = mul(input.Normal, World); // get normal into world space
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;
    return output;
}
What are output.Position.z and output.Position.w? I'm not sure as to the maths behind this.
And in the pixel shader there is this line: output.Depth = input.Depth.x / input.Depth.y;
So output.Depth is output.Position.z / output.Position.w? Why do we do this?
Finally in the point light shader (http://www.catalinzima.com/?page_id=55) to convert this output to be a position the code is:
// read depth
float depthVal = tex2D(depthSampler, texCoord).r;

// compute screen-space position
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;

// transform to world space
position = mul(position, InvertViewProjection);
position /= position.w;
Again, I don't understand this. I sort of see why we use InvertViewProjection, since we multiplied by the view projection previously, but setting z to the depth value and w to 1, and then dividing the whole position by w, confuses me quite a bit.
To understand this completely, you'll need to understand how the algebra that underpins 3D transforms works. SO does not really help (or I don't know how to use it) with writing matrix math, so this will have to be without fancy formulas. Here is a high-level explanation, though:
If you look closely, you'll notice that all the transformations that happen to a vertex position (from model to world to view to clip coordinates) use 4D vectors. That's right, 4D. Why, when we live in a 3D world? Because in that 4D representation, all the transformations we usually want to apply to vertices are expressible as matrix multiplications. This is not the case if we stay in a 3D representation. And matrix multiplications are what a GPU is good at.
What does a vertex in 3D correspond to in 4D? This is where it gets interesting. The point (x, y, z) corresponds to the line (a*x, a*y, a*z, a). We can grab any point on this line to do the math we need, and we usually pick the easiest one, a = 1 (that way, we don't have to do any multiplication; we just set w = 1).
So that answers pretty much all the math you're looking at. To project a 3D point into 4D we set w = 1; to get back a component from a 4D vector that we want to compare against our standard sizes in 3D, we have to divide that component by w.
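To make that concrete, here is the point-light reconstruction written out as a sketch, using the row-vector convention of the mul(v, M) calls above (not the exact shader semantics, just the idea):

(x, y, z, w) = p_world * View * Projection
depth        = z / w                                  (what the depth pass stores)
q            = (x_screen, y_screen, depth, 1) * (View * Projection)^-1
p_world      = (q.x, q.y, q.z) / q.w

The multiplication by the inverse view-projection lands back on the 4D line (a*x, a*y, a*z, a) through the original world position; dividing by q.w picks the representative with w = 1, which is the 3D point you want.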
This coordinate system, if you want to dive deeper, is called homogeneous coordinates.