I'm trying to get the change in orientation between two deviceorientation events along the left-right axis and the top-bottom axis, those axes usually being defined as the phone's x and y axes (https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Orientation_and_motion_data_explained).
i.e. between instants t1 and t2 where those phone axes move from (x1, y1) to (x2, y2), I'd like to get (angle(x2-x1), angle(y1-y2)).
When the device is in portrait mode (as opposed to landscape mode), those axes seem to correspond to beta and gamma. However, when the phone is vertical (bottom facing the ground), the gamma value becomes extremely unstable and jumps from 90 to -90 degrees (at the same time, the alpha value jumps by 180 degrees). You can easily see that here on your phone.
I'd like to avoid that, and also get values in the 360 range. Here is what I have so far:
// assuming portrait mode
var beta0, gamma0;
window.addEventListener('deviceorientation', function(orientation) {
  if (typeof beta0 === 'undefined') {
    beta0 = orientation.beta;
    gamma0 = orientation.gamma;
  }
  console.log('user has moved to the left by', orientation.gamma - gamma0,
              ' and to the top by', orientation.beta - beta0);
});
That works OK when the device is mostly horizontal, but not at all when it is vertical.
All right. First, a simple explanation of the device orientation input:
The absolute coordinate system (X, Y, Z) is such that X points East, Y points North and Z points up. The device-relative coordinate system (x, y, z) is such that x points right, y points to the top of the screen and z points up out of the screen. The orientation angles (alpha, beta, gamma) are the angles that describe the succession of three simple rotations that change (X, Y, Z) to (x, y, z), as follows:
rotate around Z by alpha degrees, which transforms (X, Y, Z) to (X', Y', Z') with Z' = Z
rotate around X' by beta degrees, which transforms (X', Y', Z') to (X'', Y'', Z'') with X'' = X'
rotate around Y'' by gamma degrees, which transforms (X'', Y'', Z'') to (x, y, z) with y = Y''
(they are called intrinsic Tait-Bryan angles of type Z-X'-Y'')
Now we can get the corresponding rotation matrix by composing the simple rotation matrices that each correspond to one of the three rotations:
                                 [ cC   0  sC ]   [ 1   0    0  ]   [ cA  -sA  0 ]
R(A, B, C) = Ry(C)*Rx(B)*Rz(A) = [ 0    1  0  ] * [ 0   cB  -sB ] * [ sA   cA  0 ]
                                 [ -sC  0  cC ]   [ 0   sB   cB ]   [ 0    0   1 ]
where A, B, C are short for alpha, beta, gamma and s, c for sin, cos.
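If you want to sanity-check this composition numerically, here is a small sketch (an illustration added for this answer, not part of the derivation) using gl-matrix's mat4 rotation helpers; the test angles are arbitrary:

import {mat4} from 'gl-matrix';

// arbitrary test angles, in radians
const A = 0.3, B = -0.8, C = 1.2;

// simple rotations around each axis
const Rz = mat4.fromZRotation(mat4.create(), A);
const Rx = mat4.fromXRotation(mat4.create(), B);
const Ry = mat4.fromYRotation(mat4.create(), C);

// R = Ry(C) * Rx(B) * Rz(A), composed right to left
const R = mat4.create();
mat4.multiply(R, Ry, Rx);
mat4.multiply(R, R, Rz);

// R[0] (row 1, col 1) should match the expanded formula cA*cC + sA*sB*sC
console.log(R[0], Math.cos(A) * Math.cos(C) + Math.sin(A) * Math.sin(B) * Math.sin(C));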
Now, we are interested in the angles of the right-left (y axis) and top-down (x axis) rotation deltas between two positions (x, y, z) and (x', y', z') that correspond to the orientations (A, B, C) and (A', B', C').
The coordinates of (x', y', z') in terms of (x, y, z) are given by R(A', B', C') * R(A, B, C)^-1 = R(A', B', C') * R(A, B, C)^T, since the inverse of an orthogonal (rotation) matrix is its transpose. Finally, if z' = p*x + q*y + r*z, the angles of those rotations are p around the right-left axis and q around the top-down one (this is true for small angles, which assumes frequent orientation updates; otherwise asin(p) and asin(q) are closer to the truth).
So here is some javascript to get the rotation matrix:
/*
 * gl-matrix is a nice library that handles rotation stuff efficiently
 * The 3x3 matrix is a 9 element array
 * such that indexes 0-2 correspond to the first column, 3-5 to the second column and 6-8 to the third
 */
import {mat3} from 'gl-matrix';

let _x, _y, _z;
let cX, cY, cZ, sX, sY, sZ;

/*
 * fill `out` with the rotation matrix corresponding to the orientation angles (in radians)
 */
const fromOrientation = function(out, alpha, beta, gamma) {
  _z = alpha;
  _x = beta;
  _y = gamma;

  cX = Math.cos(_x);
  cY = Math.cos(_y);
  cZ = Math.cos(_z);
  sX = Math.sin(_x);
  sY = Math.sin(_y);
  sZ = Math.sin(_z);

  out[0] = cZ * cY + sZ * sX * sY;   // row 1, col 1
  out[1] = cX * sZ;                  // row 2, col 1
  out[2] = -cZ * sY + sZ * sX * cY;  // row 3, col 1
  out[3] = -cY * sZ + cZ * sX * sY;  // row 1, col 2
  out[4] = cZ * cX;                  // row 2, col 2
  out[5] = sZ * sY + cZ * cY * sX;   // row 3, col 2
  out[6] = cX * sY;                  // row 1, col 3
  out[7] = -sX;                      // row 2, col 3
  out[8] = cX * cY;                  // row 3, col 3
};
and now we get the angular deltas:
const deg2rad = Math.PI / 180; // degree-to-radian conversion
let currentRotMat, previousRotMat, inverseMat, relativeRotationDelta,
    totalRightAngularMovement = 0, totalTopAngularMovement = 0;

window.addEventListener('deviceorientation', ({alpha, beta, gamma}) => {
  // init values if necessary
  if (!previousRotMat) {
    previousRotMat = mat3.create();
    currentRotMat = mat3.create();
    inverseMat = mat3.create();
    relativeRotationDelta = mat3.create();
    fromOrientation(currentRotMat, alpha * deg2rad, beta * deg2rad, gamma * deg2rad);
  }
  // save last orientation
  mat3.copy(previousRotMat, currentRotMat);
  // get rotation in the previous orientation coordinates
  fromOrientation(currentRotMat, alpha * deg2rad, beta * deg2rad, gamma * deg2rad);
  mat3.transpose(inverseMat, previousRotMat); // for a rotation matrix, the inverse is the transpose
  mat3.multiply(relativeRotationDelta, currentRotMat, inverseMat);
  // add the angular deltas to the cumulative rotation
  totalRightAngularMovement += Math.asin(relativeRotationDelta[6]) / deg2rad;
  totalTopAngularMovement += Math.asin(relativeRotationDelta[7]) / deg2rad;
});
Finally, to account for screen orientation, we have to replace
_z = alpha;
_x = beta;
_y = gamma;
by
const screen = window.screen;
const getScreenOrientation = () => {
  const oriented = screen && (screen.orientation || screen.mozOrientation);
  if (oriented) switch (oriented.type || oriented) {
    case 'landscape-primary':
      return 90;
    case 'landscape-secondary':
      return -90;
    case 'portrait-secondary':
      return 180;
    case 'portrait-primary':
      return 0;
  }
  return window.orientation | 0; // defaults to zero if orientation is unsupported
};

const screenOrientation = getScreenOrientation();
_z = alpha;
if (screenOrientation === 90) {
  _x = -gamma;
  _y = beta;
} else if (screenOrientation === -90) {
  _x = gamma;
  _y = -beta;
} else if (screenOrientation === 180) {
  _x = -beta;
  _y = -gamma;
} else if (screenOrientation === 0) {
  _x = beta;
  _y = gamma;
}
Note that the cumulative right-left and top-bottom angles depend on the path chosen by the user, and cannot be inferred directly from the device orientation but have to be tracked through the movement. You can arrive at the same position with different movements:
method 1:
keep your phone horizontal and rotate it by 90 degrees clockwise (this is neither a left-right nor a top-bottom rotation)
keep your phone in landscape mode and rotate it by 90 degrees toward you (this is a 90 degrees left-right rotation)
keep your phone facing you and rotate it by 90 degrees so that it's upright (this is neither a left-right nor a top-bottom rotation)
method 2:
rotate the phone by 90 degrees so that it faces you and is vertical (this is a 90 degrees top-bottom rotation)
I am sure I have missed something obvious, but I am trying to draw a quadratic curve between two points on an HTML canvas, for which I need a 'control point' to set the curve. The start and end points of the curve are known; the control point is unknown because the lines are dynamically rotated. I just need to find this third point of the triangle in order to set the control point.
I use this function to find the mid point of the line:
lineMidPoint(p: Point, q: Point): Point {
  let x = (p.x + q.x) / 2;
  let y = (p.y + q.y) / 2;
  return { x: x, y: y } as Point;
}
This function works as expected.
Then a second function to get the angle of the line relative to the origin:
getAngleRelativeToOrigin(start: Point, end: Point): number {
  let dx = start.x - end.x;
  let dy = start.y - end.y;
  let radians = Math.atan2(dy, dx);
  return radians * (180 / Math.PI);
}
It is hard to verify that this function is working.
Then finally I have a function for rotating the midpoint around either the start or the end of the line in order to find the control point:
getControlPoint(start: Point, end: Point): Point {
  let midPoint = this.lineMidPoint(start, end);
  let offset = 45 * (Math.PI / 180);
  let theta = this.getAngleRelativeToOrigin(start, end) + offset;
  let x = Math.cos(theta) * (start.x - midPoint.x) - Math.sin(theta) * (start.y - midPoint.y) + midPoint.x;
  let y = Math.sin(theta) * (start.x - midPoint.x) - Math.cos(theta) * (start.y - midPoint.y) + midPoint.y;
  return { x: x, y: y } as Point;
}
The result is this:
Those lines that are not connected to circles (for instance on the far right) should all be half the length of the line they start from, but they are clearly inconsistent.
When I draw the quadratic curves they are all wonky:
Can anyone lend a hand and tell me where I've gone wrong?
OK, your midpoint is correct.
Now determine the difference vector and a unit vector perpendicular to the line:
let dx = start.x - end.x;
let dy = start.y - end.y;
let leng = Math.hypot(dx, dy);
let px = - dy / leng; //and get perpendicular unit vector
let py = dx / leng;
I am not sure what logic you wanted to implement, so I propose getting the control point at distance d from the line's midpoint (so the curve is symmetrical):
let xxx = midPoint.x + d * px;
let yyy = midPoint.y + d * py;
If you want to rotate the midpoint about the start point, it can be done using the following approach:
let cost = Math.cos(45 * (Math.PI / 180));
let sint = Math.sin(45 * (Math.PI / 180));
let x = start.x + 0.5 * dx * cost - 0.5 * dy * sint;
let y = start.y + 0.5 * dx * sint + 0.5 * dy * cost;
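For completeness, here is a minimal sketch of the perpendicular-offset approach wired into a canvas quadratic curve; the choice d = half the line length is an assumption, picked to match the "length of the line / 2" expectation above:

function getControlPoint(start, end) {
  const dx = start.x - end.x;
  const dy = start.y - end.y;
  const leng = Math.hypot(dx, dy);
  // unit vector perpendicular to the line
  const px = -dy / leng;
  const py = dx / leng;
  // assumption: offset the midpoint by half the line length
  const d = leng / 2;
  return {
    x: (start.x + end.x) / 2 + d * px,
    y: (start.y + end.y) / 2 + d * py
  };
}

// usage with a 2D context ctx and two points a, b:
// ctx.beginPath();
// ctx.moveTo(a.x, a.y);
// const cp = getControlPoint(a, b);
// ctx.quadraticCurveTo(cp.x, cp.y, b.x, b.y);
// ctx.stroke();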
I'm trying to write a shader which renders a cube map / cube texture as an equirectangular projection.
The main part of this is done; however, I get white lines between the faces.
My methodology is:
1. Start from UV ([0,1]x[0,1])
2. Transform to [-1,1]x[-1,1] and then to [-180,180]x[-90,90]
3. These are now long/lat, which can be transformed into 3D (xyz)
4. Get the face they belong to, as well as their position within this face ([-1,1]x[-1,1])
5. Transform this face position to a UV within the cube texture
At first I thought the output of step 4 was wrong and that I was sampling from outside the texture, but even after multiplying the face coordinates by 1/2, I still get the white lines.
reference: https://codepen.io/coutteausam/pen/jOKKYYy
float max3(vec3 v) {
  return max(max(v.x, v.y), v.z);
}

vec2 sample_cube_map_1(vec3 xyz, out float faceIndex) {
  xyz /= length(xyz);
  float m = max3(abs(xyz));
  if (abs(xyz.x) == m) {
    faceIndex = sign(xyz.x);
    return xyz.yz / abs(xyz.x);
  }
  if (abs(xyz.y) == m) {
    faceIndex = 2. * sign(xyz.y);
    return xyz.xz / abs(xyz.y);
  }
  if (abs(xyz.z) == m) {
    faceIndex = 3. * sign(xyz.z);
    return xyz.xy / abs(xyz.z);
  }
  faceIndex = 1.0;
  return vec2(0., 0.);
}

vec2 sample_cube_map(vec3 xyz) {
  float face;
  vec2 xy = sample_cube_map_1(xyz, face);
  xy = (xy + 1.) / 2.; // [-1,1] -> [0,1]
  xy.x = clamp(xy.x, 0., 1.);
  xy.y = clamp(xy.y, 0., 1.);
  if (face == 1.) {
    // front
    xy += vec2(1., 1.);
  }
  else if (face == -1.) {
    // back
    xy.x = 1. - xy.x;
    xy += vec2(3., 1.);
  }
  else if (face == 2.) {
    // right
    xy.x = 1. - xy.x;
    xy += vec2(2., 1.);
  }
  else if (face == -2.) {
    // left
    xy += vec2(0., 1.);
  }
  else if (face == 3.) {
    // top
    xy = vec2(xy.y, 1. - xy.x);
    xy += vec2(1., 2.);
  }
  else if (face == -3.) {
    // bottom
    xy = xy.yx;
    xy += vec2(1., 0.);
  }
  else {
    xy += vec2(1., 0.);
  }
  return xy / vec2(4., 3.); // [0,4]x[0,3] -> [0,1]x[0,1]
}

// projects
// uv:([0,1] x [0,1])
// to
// xy:([ -2, 2 ] x [ -1, 1 ])
vec2 uv_2_xy(vec2 uv) {
  return vec2(uv.x * 4. - 2., uv.y * 2. - 1.);
}

// projects
// xy:([ -2, 2 ] x [ -1, 1 ])
// to
// longlat: ([ -pi, pi ] x [-pi/2,pi/2])
vec2 xy_2_longlat(vec2 xy) {
  float pi = 3.1415926535897932384626433832795;
  return xy * pi / 2.;
}

vec3 longlat_2_xyz(vec2 longlat) {
  return vec3(cos(longlat.x) * cos(longlat.y), sin(longlat.x) * cos(longlat.y), sin(longlat.y));
}

vec3 uv_2_xyz(vec2 uv) {
  return longlat_2_xyz(xy_2_longlat(uv_2_xy(uv)));
}

vec3 roty(vec3 xyz, float alpha) {
  return vec3(cos(alpha) * xyz.x + sin(alpha) * xyz.z, xyz.y, cos(alpha) * xyz.z - sin(alpha) * xyz.x);
}

varying vec2 vUv;
uniform sampler2D image;
uniform float time;

void main() {
  vec3 xyz = uv_2_xyz(vUv);
  xyz = roty(xyz, time);
  vec2 uv = sample_cube_map(xyz);
  vec4 texturePixel = texture2D(image, vec2(clamp(uv.x, 0., 1.), clamp(uv.y, 0., 1.)));
  gl_FragColor = texturePixel;
}
The math behind your shader looks sound. However, you need to consider how texture sampling behaves when dealing with sub-pixels. Take a look at your source texture:
When you cross the boundary between Top and Right, your sampler will try to squeeze all the texture between magenta and teal into a single pixel. Since there's lots of white in that area, the squeezed pixel will be mostly white. Notice this doesn't happen between Top and Front, because there's no white area between those two faces. (Read: mipmapping, to see how textures behave when scaled down.)
Solutions:
You might be able to sample the nearest full pixel, instead of blending between them, by turning off mipmapping. To do so, change the texture's minification filter to linear filtering:
const imgTexture = new THREE.TextureLoader().load('https://i.imgur.com/tBzfYG5.png');
imgTexture.minFilter = THREE.LinearFilter;
The only downside is that you might get some hard edges along the boundaries.
If your project allows it, you could simply break up your texture into six images and use the cube map method offered by Three.js.
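A minimal sketch of that route, assuming the texture has been split into six face images (the file names and path here are placeholders):

// assumes six separate face images; names and path are hypothetical
const loader = new THREE.CubeTextureLoader().setPath('textures/cube/');
const cubeMap = loader.load([
  'px.png', 'nx.png', // +x, -x
  'py.png', 'ny.png', // +y, -y
  'pz.png', 'nz.png'  // +z, -z
]);
scene.background = cubeMap; // or use it as a material's envMap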
I'm using the WebGL globe from http://workshop.chromeexperiments.com/globe/. If any point of the globe is clicked, I need to get the longitude and latitude of that point. These parameters are then passed to Google Maps for a 2D map.
How can I get the longitude and latitude from the WebGL globe?
Through the function below I'm getting the double-clicked point, and from this point I'm computing the longitude and latitude. But the results are not correct; it seems that the clicked point is not determined properly.
function onDoubleClick(event) {
  event.preventDefault();
  var vector = new THREE.Vector3(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1,
    0.5
  );
  projector.unprojectVector(vector, camera);
  var ray = new THREE.Ray(camera.position, vector.subSelf(camera.position).normalize());
  var intersects = ray.intersectObject(globe3d);
  if (intersects.length > 0) {
    object = intersects[0];
    console.log(object);
    r = object.object.boundRadius;
    x = object.point.x;
    y = object.point.y;
    z = object.point.z;
    console.log(Math.sqrt(x * x + y * y + z * z));
    lat = 90 - (Math.acos(y / r)) * 180 / Math.PI;
    lon = ((270 + (Math.atan2(x, z)) * 180 / Math.PI) % 360) - 180;
    console.log(lat);
    console.log(lon);
  }
}
Get the WebGL Globe here https://github.com/dataarts/webgl-globe/archive/master.zip
You can open it directly in Firefox; if you open it in Chrome, the earth surface image fails to load because of the Cross-Origin Resource Sharing policy, so it needs to be served from a virtual host.
Try to use the function in this way
function onDoubleClick(event) {
  event.preventDefault();
  var canvas = renderer.domElement;
  var vector = new THREE.Vector3(
    ((event.offsetX) / canvas.width) * 2 - 1,
    -((event.offsetY) / canvas.height) * 2 + 1,
    0.5
  );
  projector.unprojectVector(vector, camera);
  var ray = new THREE.Ray(camera.position, vector.subSelf(camera.position).normalize());
  var intersects = ray.intersectObject(globe3d);
  if (intersects.length > 0) {
    object = intersects[0];
    r = object.object.boundRadius;
    x = object.point.x;
    y = object.point.y;
    z = object.point.z;
    lat = 90 - (Math.acos(y / r)) * 180 / Math.PI;
    lon = ((270 + (Math.atan2(x, z)) * 180 / Math.PI) % 360) - 180;
    lat = Math.round(lat * 100000) / 100000;
    lon = Math.round(lon * 100000) / 100000;
    window.location.href = 'gmaps?lat=' + lat + '&lon=' + lon;
  }
}
I used the code you shared with a little correction and it works great.
The way to make it work correctly is to understand exactly what you pass to new THREE.Vector3.
This constructor needs three parameters (x, y, z):
z in our/your case is hard-coded to 0.5, and that's OK.
x and y must be numbers between -1 and 1, so to obtain these values you need to catch the click coordinates on your canvas and then, with this formula, reduce them to values in that range (-1...0...1):
var vectorX = ((p_coord_X / canvas.width) * 2 - 1);
var vectorY = -((p_coord_Y / canvas.height) * 2 - 1);
where p_coord_X and p_coord_Y are the coordinates of the click (relative to the top-left corner of your canvas) and canvas is the canvas area where your WebGL globe lives.
The problem is how to get the click X and Y coordinates in pixels, because that depends on how your canvas is placed in your HTML environment.
In my case the solutions proposed above were not suitable because they always returned wrong results, so I built a solution to get exactly the x and y coordinates of my canvas area as I clicked on it (in my case I also had to apply a scrollY page correction); a common way to do this is sketched below.
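A sketch of that idea (not the exact code from my page) using the standard getBoundingClientRect API; since clientX/clientY are viewport-relative, page scroll is already accounted for:

function getCanvasRelativeCoords(event, canvas) {
  // the bounding rect is in viewport coordinates, like clientX/clientY,
  // so the difference is the click position inside the canvas
  var rect = canvas.getBoundingClientRect();
  return {
    x: event.clientX - rect.left,
    y: event.clientY - rect.top
  };
}

// usage inside onDoubleClick:
// var coords = getCanvasRelativeCoords(event, renderer.domElement);
// var vectorX = (coords.x / canvas.width) * 2 - 1;
// var vectorY = -((coords.y / canvas.height) * 2 - 1);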
Now imagine dividing your canvas area into 4 quadrants: a click in the NW quadrant will return, for example, x and y values like -0.8 and -0.5, and a click in the SE quadrant a pair like 0.6 and 0.4.
The ray.intersectObject() function that follows then uses our click-vector-converted data to determine whether our click intersects the globe; if it does, the lat and lon coordinates are returned correctly.
Again, I am still trying to get my low-pass filter running, but I am at a point where I do not know why this is still not working. I based my code on FFT Filters and my previous question FFT Question in order to apply an ideal low-pass filter to the image. The code below just makes the image darker and places some white pixels in the resulting image.
// forward fft; the result is in freqBuffer
fftw_execute(forward);

for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        uint gid = y * w + x;
        // shifted coordinates normalized to [-0.5 ... 0.5]
        double xN = (x - (w / 2)) / (double)w;
        double yN = (y - (h / 2)) / (double)h;
        // max radius
        double maxR = sqrt(0.5f * 0.5f + 0.5f * 0.5f);
        // current radius normalized to [0 .. 1]
        double r = sqrt(xN * xN + yN * yN) / maxR;
        // filter response
        double filter = r > 0.7f ? 0.0f : 1.0f;
        // apply the filter response
        freqBuffer[gid][0] *= filter;
        freqBuffer[gid][1] *= filter;
    }
}

// normalization (see fftw scaling)
for (uint i = 0; i < size; i++)
{
    freqBuffer[i][0] /= (float)size;
    freqBuffer[i][1] /= (float)size;
}

// backward fft
fftw_execute(backward);
Some help would be appreciated.
Wolf
If you have a filter with a step response in the frequency domain, you will see significant sin(x)/x ringing in the spatial domain. This is known as the Gibbs phenomenon. You need to apply a window function to the desired frequency response to mitigate it.
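To illustrate the idea (sketched in JavaScript for brevity, since it is language-independent), the hard cutoff filter = r > 0.7 ? 0 : 1 can be replaced by a smooth roll-off such as a raised-cosine taper; the two radii here are arbitrary choices:

// smooth low-pass response: 1 inside rPass, 0 beyond rStop,
// raised-cosine (Hann-style) roll-off in between
function lowPassResponse(r, rPass, rStop) {
  if (r <= rPass) return 1;
  if (r >= rStop) return 0;
  const t = (r - rPass) / (rStop - rPass);
  return 0.5 * (1 + Math.cos(Math.PI * t));
}

// instead of: filter = r > 0.7 ? 0.0 : 1.0
// use:        filter = lowPassResponse(r, 0.6, 0.8)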
How come this doesn't work? Does rotate only work with images?
context.moveTo(60,60);
context.lineTo(200,60);
context.lineTo(200,200);
context.lineTo(60,200);
context.lineTo(60,60);
context.stroke();
context.rotate(45 * Math.PI / 180);
context.restore();
You are rotating the whole canvas when you use context.rotate, and since the pivot point defaults to the coordinates (0, 0), your square may sometimes be drawn out of bounds.
By moving the pivot point to the middle of the square, you can then rotate it successfully.
Note: Make sure you rotate the canvas before you draw the square.
// pivot point coordinates = the center of the square
var cx = 130; // (60+200)/2
var cy = 130; // (60+200)/2
// Note that the x and y values of the square
// are relative to the pivot point.
var x = -70; // cx + x = 130 - 70 = 60
var y = -70; // cy + y = 130 - 70 = 60
var w = 140; // (cx + x) + w = 60 + w = 200
var h = 140; // (cy + y) + h = 60 + h = 200
var deg = 45;
context.save();
context.translate(cx, cy);
context.rotate(deg * Math.PI/180);
context.fillRect(x, y, w, h);
context.restore();
Explanation:
context.save(); saves the current state of the coordinate system.
context.translate(cx, cy); moves the pivot point.
context.rotate(deg * Math.PI/180); rotates the square by deg degrees (note that the parameter is in radians, not degrees)
context.fillRect( x, y, w, h ); draws the square
context.restore(); restores the last state of the coordinate system.
Here is a JS Fiddle example.
Here is another JS Fiddle example that features an HTML5 slider.