How does this work in computing the depth map?

From this site: http://www.catalinzima.com/?page_id=14
I've always been confused about how the depth map is calculated.
The vertex shader function calculates position as follows:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord; // pass the texture coordinates further
    output.Normal = mul(input.Normal, World); // get the normal into world space
    output.Depth.x = output.Position.z;
    output.Depth.y = output.Position.w;
    return output;
}
What are output.Position.z and output.Position.w? I'm not sure about the maths behind this.
And in the pixel shader there is this line: output.Depth = input.Depth.x / input.Depth.y;
So output.Depth is output.Position.z / output.Position.w? Why do we do this?
Finally in the point light shader (http://www.catalinzima.com/?page_id=55) to convert this output to be a position the code is:
// read depth
float depthVal = tex2D(depthSampler, texCoord).r;
// compute screen-space position
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;
// transform to world space
position = mul(position, InvertViewProjection);
position /= position.w;
Again I don't understand this. I sort of see why we use InvertViewProjection, since we multiplied by the view projection earlier, but setting z to the depth value and w to 1, and then dividing the whole position by w, confuses me quite a bit.

To understand this completely, you'll need to understand the algebra that underpins 3D transforms. SO doesn't really lend itself to writing out matrix math (or I don't know how to do it here), so it'll have to be without fancy formulae. Here is a high-level explanation, though:
If you look closely, you'll notice that all the transformations applied to a vertex position (from model to world to view to clip coordinates) use 4D vectors. That's right: 4D. Why, when we live in a 3D world? Because in that 4D representation, all the transformations we usually want to apply to vertices are expressible as matrix multiplications. This is not the case if we stay in a 3D representation. And matrix multiplications are what a GPU is good at.
What does a vertex in 3D correspond to in 4D? This is where it gets interesting. The point (x, y, z) corresponds to the line (a·x, a·y, a·z, a). We can grab any point on this line to do the math we need, and we usually pick the easiest one, a = 1 (that way, we don't have to do any multiplication; we just set w = 1).
That answers pretty much all the math you're looking at. To lift a 3D point into 4D, we set w = 1; to get back a component of a 4D vector that we want to compare against our standard 3D sizes, we divide that component by w.
This coordinate system, if you want to dive deeper, is called homogeneous coordinates.
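To make the round trip concrete, here is a minimal sketch of the same math in TypeScript, not the shader code itself: project a world-space point with a view-projection matrix, store z/w as the depth, then reconstruct the position by multiplying the screen-space point (with w set back to 1) by the inverse matrix and dividing by the resulting w. The small matrix helper is written out so the sketch is self-contained; note that HLSL's mul(vector, matrix) treats the vector as a row vector, while this uses plain column-vector math, which is equivalent up to transposition.
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, column-major

// Multiply a 4x4 matrix by a 4D column vector.
function transform(m: Mat4, v: Vec4): Vec4 {
    const out: Vec4 = [0, 0, 0, 0];
    for (let row = 0; row < 4; row++) {
        out[row] = m[row] * v[0] + m[4 + row] * v[1] + m[8 + row] * v[2] + m[12 + row] * v[3];
    }
    return out;
}

// Forward pass: what the vertex and pixel shaders above compute.
function storedDepth(viewProjection: Mat4, worldPos: Vec4): number {
    const clip = transform(viewProjection, worldPos); // output.Position
    return clip[2] / clip[3];                         // Depth.x / Depth.y = z / w
}

// Backward pass: what the point-light shader does with the depth buffer.
function reconstruct(invViewProjection: Mat4, screenX: number, screenY: number, depth: number): Vec4 {
    const p = transform(invViewProjection, [screenX, screenY, depth, 1]); // w = 1
    return [p[0] / p[3], p[1] / p[3], p[2] / p[3], 1];                    // position /= position.w
}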

Related

Convert Autodesk Viewer Units to Inches

I am using the viewer with the Edit2D library and am trying to convert the distance between two (x, y) points into real measurements.
For example, after a shape is drawn using the polygon tool, I want to get the length of its first edge.
In the event handler shown below, I get the drawn shape, take its first two points, and compute the distance between them. The values seem to be in Autodesk units or something. Is there an easy way to convert them to feet or inches?
I have found
Edit2DExtension.defaultContext.unitHandler.fromDisplayUnits()
as well as
Edit2DExtension.defaultContext.unitHandler.toDisplayUnits()
and also
Autodesk.Viewing.Private.convertUnits().
I've tried all three, but am unsure how to use them and haven't found any good results with them yet.
There may be a way to do it through Edit2D, but I haven't found one yet, and there is next to no documentation I can find on this library.
beforeEdit2DAction(event) {
    console.log('After Shape has been drawn -> ', event);
    let shape = event.action.shape;
    let pointA = shape._loops[0][0]; // Value: {x: 21.393766403198242, y: 20.934386880096092}
    let pointB = shape._loops[0][1]; // Value: {x: 25.082155227661133, y: 20.934386880096092}
    // Distance between 2 points (assuming Autodesk units)
    let length = Autodesk.Edit2D.Math2D.distance2D(pointA, pointB); // 3.6883888244628906
    // Need to convert to real-world units (preferably ft or inches)
}
The real length is 29.5 FEET.
Any ideas or comments are welcome! Thanks.
Edit: Trying Petr's suggestion, here's what it returned:
That's an interesting question. The "unit handler" keeps track of two types of units:
layer units (Edit2DExtension.defaultContext.unitHandler.config.layerUnits, can be inch for example)
display units (Edit2DExtension.defaultContext.unitHandler.config.displayUnits)
These two properties control how the actual lengths and areas are displayed. For example, the unit handler's toDisplayUnits method is implemented like so:
toDisplayUnits(fromUnits, value) {
    this.updateConfig();
    return Autodesk.Viewing.Private.convertUnits(fromUnits, this.config.displayUnits, this.config.scaleFactor, value);
}
With that, configuring fromUnits and displayUnits (and scale) properly should give you the real measurements you need.
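As a minimal sketch of how this might look in the question's event handler, assuming the layer units are configured correctly and that distance2D returns a value in layer units (an assumption, not an official recipe):
const unitHandler = Edit2DExtension.defaultContext.unitHandler;
// Raw distance in layer units, as in the question.
const rawLength = Autodesk.Edit2D.Math2D.distance2D(pointA, pointB);
// Convert from the layer's units to the configured display units (e.g. 'ft' or 'in').
const realLength = unitHandler.toDisplayUnits(unitHandler.config.layerUnits, rawLength);
console.log(realLength, unitHandler.config.displayUnits);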

Converting altitude to z-level (and vice versa)

When using ol3-cesium with the map in 3D mode, calling map.getView().getZoom() returns undefined. This might affect setZoom as well.
I understand we are in a 3D world, so there are no z-levels as in tiled maps. On the other hand, Google Maps calculates a z-equivalent when coming back from 3D to 2D.
How can I convert from height to a z-equivalent? Is there any formula, taking into account the latitude and altitude, to get the z-equivalent?
There's no easy formula to get a 2D "Z" value from 3D, because the 3D camera can be tilted, can see different levels of tiles in the foreground vs the background, etc.
For individual tiles however, there are specific known "Level" values from the imagery quadtree. You can see these in Cesium Inspector by clicking the little + next to the word Terrain on the right side, and then put a checkmark on Show tile coordinates. The coordinates shown include L, X, and Y, where L is the tile's level (0 being the most zoomed-out, higher numbers more zoomed in), and X and Y are 2D positions within the imagery layer.
I posted an answer on GIS SE that shows how to reach in and grab these tiles, the same way Cesium Inspector does, along with a number of caveats involved. You could potentially look for the highest-level visible tile, and use that as your "Z" value.
I know this is not accurate, but I'm sharing it in case it is of use to anyone.
I moved through several altitudes in Google Maps, switching between the 2D and 3D maps, and wrote down the z or altitude shown in the address bar:
z altitude (metres)
----- -----------------
3 10311040
4 5932713
5 2966357
6 1483178
7 741589
8.6 243624
11.35 36310
13.85 6410
15.26 2411
17.01 717
18.27 214
19.6 119
20.77 50
21 44
With the above correspondences, I have approximated the following function:
function altitudeToZoom(altitude) {
    var A = 40487.57;
    var B = 0.00007096758;
    var C = 91610.74;
    var D = -40467.74;
    return D + (A - D) / (1 + Math.pow(altitude / C, B));
}
Based on your formula, the reverse conversion should be:
altitude = C * Math.pow((A - D) / (zoomLevel - D) - 1, 1 / B);
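Putting both directions together, here is a sketch using the fitted constants above; the inverse is algebraically exact, but the fit itself is only as good as the handful of samples in the table:
const A = 40487.57;
const B = 0.00007096758;
const C = 91610.74;
const D = -40467.74;

function altitudeToZoom(altitude: number): number {
    return D + (A - D) / (1 + Math.pow(altitude / C, B));
}

function zoomToAltitude(zoom: number): number {
    return C * Math.pow((A - D) / (zoom - D) - 1, 1 / B);
}

// Round trip: zoomToAltitude(altitudeToZoom(h)) returns h (up to floating point).
console.log(zoomToAltitude(altitudeToZoom(36310))); // ≈ 36310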

Freefem++: Solving the Poisson equation with a numerical function

I am using FreeFem++ to solve the Poisson equation
∇²u(x,y,z) = -f(x,y,z)
It works well when I have an analytical expression for f, but now I have an f numerically defined (i.e. a set of data defined on a mesh) and I am wondering if I can still use Freefem++.
Typical code (for a 2D problem in this case) looks like the following:
mesh Sh = square(10,10);                 // mesh generation of a square
fespace Vh(Sh,P1);                       // space of P1 finite elements
Vh u,v;                                  // u and v belong to Vh
func f = cos(x)*y;                       // analytical function
problem Poisson(u,v) =                   // definition of the problem
    int2d(Sh)(dx(u)*dx(v)+dy(u)*dy(v))   // bilinear form
    - int2d(Sh)(f*v)                     // linear form
    + on(1,2,3,4,u=0);                   // Dirichlet conditions
Poisson;                                 // solve the Poisson equation
plot(u);                                 // plot the result
I am wondering if I can define f numerically, rather than analytically.
Mesh & space definition
We define a unit square meshed with Nx=10 and Ny=10; this gives 11 nodes along the x axis and the same along y.
int Nx=10, Ny=10;
int Lx=1, Ly=1;
mesh Sh = square(Nx,Ny,[Lx*x,Ly*y]); // this is the same as square(10,10)
fespace Vh(Sh,P1);                   // a space of P1 finite elements to use for u
Conditions and problem statement
We are not going to use solve; instead we will handle the matrices directly (a more sophisticated way of solving with FreeFem).
First we define CL, the (Dirichlet) boundary conditions for our problem.
varf CL(u,psi) = on(1,2,3,4,u=0); // you can drop borders according to your problem
Vh u=0; u[] = CL(0,Vh);
matrix GD = CL(Vh,Vh);
Then we define the problem. Instead of writing dx(u)*dx(v)+dy(u)*dy(v) I suggest using a macro, so we define grad as follows. Pay attention: a macro definition ends with // and NOT with ;.
macro grad(u) [dx(u),dy(u)] //
So the Poisson bilinear form becomes:
varf Poisson(u,v) = int2d(Sh)(grad(u)'*grad(v));
Then we extract the stiffness matrix:
matrix K = Poisson(Vh,Vh);
matrix KD = K + GD; // we add the boundary conditions defined above
We proceed to solving; UMFPACK is one of the solvers available in FreeFem, nothing special to pay attention to here.
set(KD,solver=UMFPACK);
And here is what you need: you want to set the value of the function f at specific nodes. The trick is the Poisson linear form, the right-hand side:
real[int] b = Poisson(0,Vh);
You can set the value of f at any node you want:
b[100] += 20; // for example, at node 100 we want f to equal 20
b[50] += 50;  // and at node 50, f equals 50
We solve our system.
u[] = KD^-1*b;
Finally we get the plot.
plot(u,wait=1);
I hope this will help you. Thanks to my internship supervisor Olivier, who always shares tricks with me, especially on FreeFem. I tested it and it works very well. Good luck.
The method by afaf works in the case where the function f is free-standing. For terms like int2d(Sh)(f*u*v), another solution is required. I propose (actually I have read it somewhere in Hecht's manual) an approach that covers both cases. However, it works only for P1 finite elements, for which the degrees of freedom coincide with the mesh nodes.
fespace Vh(Sh,P1);
Vh f;
real[int] pot(Vh.ndof);
for (int i = 0; i < Vh.ndof; i++) {
    pot[i] = something; // assign values or read them from a file
}
f[] = pot;

AS3 Polar coordinate from Cartesian coordinate

I'm playing with body animation in AS3. I made a body with all its parts (excluding fingers) and an XML with the "skeleton". The XML holds the instances of each part and the location of the articulation of the next part. I made it work with Cartesian coordinates (x, y), and the body moves when I rotate a part and recalculate all the links again (each part at each articulation).
However, this demands recalculation on every little modification of the body, so now I'm optimizing it. For the design, (x, y) is easier, so when the body instance is created, the class rebuilds the XML, converting the coordinates to the polar system (r, t), like this ("Quadro" is the node with the coordinates):
dx = Quadro.#x;
dy = Quadro.#y;
Quadro.#r = Math.sqrt(Math.pow(dx,2) + Math.pow(dy,2));
Quadro.#t = (dy>0)? Math.asin(dx/Quadro.#r) : Math.acos(dy/Quadro.#r);
I made some changes to try to make it work, but at least one quadrant is always wrong! In this case, the upper left is wrong: the neck and the head should be there, but they appear in the upper right (mirrored).
Any tips for a right conversion in AS3?
Try to use this:
Quadro.#t=Math.atan2(dy,dx);
From Wikipedia:
The Cartesian coordinates x and y can be converted to polar coordinates r and φ, with r ≥ 0 and φ in the interval (−π, π], by:
r = sqrt(x^2 + y^2)
φ = atan2(y, x)
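A minimal sketch of the full conversion (TypeScript syntax here, but Math.sqrt and Math.atan2 behave the same way in AS3):
function cartesianToPolar(dx: number, dy: number): { r: number; t: number } {
    return {
        r: Math.sqrt(dx * dx + dy * dy), // radius
        t: Math.atan2(dy, dx),           // angle in (−π, π], correct in all four quadrants
    };
}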

How to control a kiwi drive robot?

I'm on the FIRST robotics team at my high school, and we are working on developing a kiwi drive robot, with three omni wheels mounted in an equilateral triangle configuration, like this:
The problem is programming the robot to drive the motors so that the robot moves in the direction of a given joystick input. For example, to move "up", motors 1 and 2 would be powered equally, while motor 3 would be off. The joystick position is given as a vector, and I was thinking that if the motors were expressed as vectors too, vector projection might be what I need. However, I'm not sure if this is right, and if it is, how I would apply it. I also have a feeling that there may be multiple solutions for one joystick position. Any help would be greatly appreciated.
I've built 9 robots during my time at school (1 FIRST, 8 RoboCup). We used the same omnidrive layout as you do. Beta's answer looks correct, but add a rotation term R to all wheels afterwards:
W1 = -1/2 X - sqrt(3)/2 Y + R
W2 = -1/2 X + sqrt(3)/2 Y + R
W3 = X + R
[This is Beta's formula with some added Rotation]
You need to think about the available ranges for your motors. I am guessing they take a PWM signal of +/-255, so either the input or the output has to be scaled somewhat. (It's not that hard...)
A good paper with details
To answer your specific questions: vector projection is essentially what you are doing here. You apply it with a matrix M, your input vector from the joystick I, and your output to the motors O. Thus O = M * I, with:
M = [ -0.5  -sqrt(3)/2  +1
      -0.5  +sqrt(3)/2  +1
       1     0          +1 ]
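A minimal sketch of that mixing in TypeScript (the rows of M come from above; normalizing the joystick input to [-1, 1] and the ±255 PWM output range are assumptions, not given in the question):
// Wheel mixing for a kiwi drive: O = M * I, where I = [x, y, r].
const M: number[][] = [
    [-0.5, -Math.sqrt(3) / 2, 1], // wheel 1
    [-0.5, +Math.sqrt(3) / 2, 1], // wheel 2
    [1.0, 0.0, 1],                // wheel 3
];

// x, y, r are each assumed to be normalized to [-1, 1].
function wheelSpeeds(x: number, y: number, r: number): number[] {
    const raw = M.map(row => row[0] * x + row[1] * y + row[2] * r);
    // Scale uniformly so no wheel exceeds full power, preserving the direction of travel.
    const peak = Math.max(1, ...raw.map(Math.abs));
    return raw.map(w => Math.round((w / peak) * 255)); // assumed ±255 PWM range
}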
First let's define some terms. In keeping with the usual convention, the X axis will point to the right and the Y axis will point up (so that the thrust of wheel 3 is along the X axis). We'll call the motion of the wheels W1, W2 and W3, each defined so that Wi > 0 means that the wheel rotates in the clockwise direction. In your example, if W1 < 0, W2 = -W1 and W3 = 0, the robot will move in the +Y direction.
If all three wheels rotated at the same rate (W1 = W2 = W3) the robot would rotate in place. I'm guessing you don't want that, so the sum of the rotations must be zero: W1 + W2 + W3 = 0.
The motion of each wheel contributes to the motion of the robot; they add as vectors:
W1 = -1/2 X - sqrt(3)/2 Y
W2 = -1/2 X + sqrt(3)/2 Y
W3 = X
So if you know the desired X and Y from the joystick, you have W1, W2 and W3. As we've already seen, the difference between W1 and W2 is what drives Y motion. Their sum drives motion in X.
Though this system can be solved mathematically, in 2002 FIRST Team 857 chose to solve it mechanically. Our control system used three joysticks mounted with their X axes forming an equilateral triangle, with the handles replaced by ball-socket arms connected to a Y-shaped yoke. Map the X axis of each stick directly to a motor speed, and the control system is solved. As an advantage, this system is very intuitive for laypeople to run: push the yoke in the direction you want to go, rotate it to turn.
As you have recognized, the first part of this will be finding an appropriate equation to represent the resultant motion for any motor settings. Depending on the level of control and feedback you have on your motor speeds, I would suggest the process you go through should start with writing vector equations (define positive X as straight ahead):
-M1*cos(30) + M2*cos(30) = X (the negative is because motors 1 and 2 must be powered with the same magnitude, but opposite polarities, for forward motion)
M1*sin(30) + M2*sin(30) - M3 = Y (as anticlockwise motion on 1 and 2 will move the robot left along Y, and anticlockwise motion on 3 will move it to the right)
The other input that you need to add into this is the desired rotation of the robot. Thankfully, M1 + M2 + M3 = W (rotational velocity).
Your joystick input will give you X,Y and W, so you have 3 equations with 3 unknowns.
From here it is simultaneous equations, so you may end up with multiple solutions, but these can generally be restricted based on possible motor speeds and the like.
An example of this is the rec::robotino::com::OmniDrive class; the source code for this method is available too...