Has anyone tried using libgdx on Project Tango? I have a problem updating the camera based on pose data. I am using the following code:
// Quaternion used to convert from Tango start-of-service coordinates
// to OpenGL coordinates
conversionQuaternion = new Quaternion(M_SQRT_2_OVER_2, -M_SQRT_2_OVER_2, 0.0f, 0.0f);
position.set((float) pose.translation[0], (float) pose.translation[2],
-(float) pose.translation[1]);
rotation.set((float) pose.rotation[0], (float) pose.rotation[1],
(float) pose.rotation[2], (float) pose.rotation[3]);
rotation.mulLeft(conversionQuaternion);
Assets.cam.up.set(0, 1, 0);
Assets.cam.direction.set(0, 0, -1);
Assets.cam.position.set(position);
Assets.cam.rotate(rotation);
Assets.cam.update();
Related
If both use hardware acceleration (GPU) to execute code, why is WebGL so much faster than Canvas?
I mean, I want to know what happens at a low level, along the chain from the code to the processor.
What happens? Do Canvas/WebGL communicate directly with the drivers and then with the video card?
Canvas is slower because it's generic and therefore hard to optimize to the same level that you can optimize WebGL. Let's take a simple example, drawing a solid circle with arc.
Canvas actually runs on top of the GPU as well, using the same APIs as WebGL. So, what does canvas have to do when you draw a circle? The minimum code to draw a circle in JavaScript using canvas 2d is
ctx.beginPath();
ctx.arc(x, y, radius, startAngle, endAngle);
ctx.fill();
You can imagine internally the simplest implementation is
beginPath creates a buffer (gl.bufferData)
arc generates the points for triangles that make a circle and uploads with gl.bufferData.
fill calls gl.drawArrays or gl.drawElements
But wait a minute ... knowing what we know about how GL works, canvas can't generate the points at step 2, because if we call stroke instead of fill we need a different set of points: a solid circle (fill) needs different triangles than an outline of a circle (stroke). So, what really happens is something more like
beginPath creates or resets some internal buffer
arc generates the points that make a circle into the internal buffer
fill takes the points in that internal buffer, generates the correct set of triangles for the points in that internal buffer into a GL buffer, uploads them with gl.bufferData, calls gl.drawArrays or gl.drawElements
What happens if we want to draw 2 circles? The same steps are likely repeated.
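To make those steps concrete, here is a rough pseudo-implementation of that hypothesized flow. This is illustration only, not real browser code; tessellate is a made-up placeholder for the fill-specific triangulation, and gl stands for the browser's internal GL context.

class HypotheticalContext2D {
  beginPath() {
    this.points = [];                                  // reset the internal point buffer
  }
  arc(x, y, radius, startAngle, endAngle) {
    // append points approximating the arc to this.points
  }
  fill() {
    const triangles = tessellate(this.points);         // placeholder: triangulate for 'fill'
    gl.bufferData(gl.ARRAY_BUFFER, triangles, gl.STREAM_DRAW);
    gl.drawArrays(gl.TRIANGLES, 0, triangles.length / 2);
  }
}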
Let's compare that to what we would do in WebGL. Of course in WebGL we'd have to write our own shaders (Canvas has its shaders as well). We'd also have to create a buffer and fill it with the triangles for a circle (note we already saved time because we skipped the intermediate buffer of points). We then can call gl.drawArrays or gl.drawElements to draw our circle. And if we want to draw a second circle? We just update a uniform and call gl.drawArrays again, skipping all the other steps.
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
  gl_Position = u_matrix * position;
}
`;
const fs = `
precision mediump float;
uniform vec4 u_color;
void main() {
  gl_FragColor = u_color;
}
`;
const program = twgl.createProgram(gl, [vs, fs]);
const positionLoc = gl.getAttribLocation(program, 'position');
const colorLoc = gl.getUniformLocation(program, 'u_color');
const matrixLoc = gl.getUniformLocation(program, 'u_matrix');
const positions = [];
const radius = 50;
const numEdgePoints = 64;
for (let i = 0; i < numEdgePoints; ++i) {
  const angle0 = (i    ) * Math.PI * 2 / numEdgePoints;
  const angle1 = (i + 1) * Math.PI * 2 / numEdgePoints;
  // make a triangle
  positions.push(
    0, 0,
    Math.cos(angle0) * radius,
    Math.sin(angle0) * radius,
    Math.cos(angle1) * radius,
    Math.sin(angle1) * radius,
  );
}
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.useProgram(program);
const projection = m4.ortho(0, gl.canvas.width, 0, gl.canvas.height, -1, 1);
function drawCircle(x, y, color) {
  const mat = m4.translate(projection, [x, y, 0]);
  gl.uniform4fv(colorLoc, color);
  gl.uniformMatrix4fv(matrixLoc, false, mat);
  gl.drawArrays(gl.TRIANGLES, 0, numEdgePoints * 3);
}
drawCircle( 50, 75, [1, 0, 0, 1]);
drawCircle(150, 75, [0, 1, 0, 1]);
drawCircle(250, 75, [0, 0, 1, 1]);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Some devs might look at that and think Canvas caches the buffer so it can just reuse the points on the 2nd draw call. It's possible that's true but I kind of doubt it. Why? Because of the genericness of the canvas API: fill, the function that does all the real work, doesn't know what's in the internal buffer of points. You can call arc, then moveTo, lineTo, then arc again, then call fill. All of those points will be in the internal buffer of points when we get to fill.
const ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
ctx.moveTo(50, 30);
ctx.lineTo(100, 150);
ctx.arc(150, 75, 30, 0, Math.PI * 2);
ctx.fill();
<canvas></canvas>
In other words, fill always needs to look at all the points. Another thing: I suspect arc tries to optimize for size. If you call arc with a radius of 2 it probably generates fewer points than if you call it with a radius of 2000. It's possible canvas caches the points, but given that the hit rate would likely be small, it seems unlikely.
In any case, the point is WebGL lets you take control at a lower level, allowing you to skip steps that canvas can't skip. It also lets you reuse data that canvas can't reuse.
In fact if we know we want to draw 10000 animated circles we even have other options in WebGL. We could generate the points for 10000 circles which is a valid option. We could also use instancing. Both of those techniques would be vastly faster than canvas since in canvas we'd have to call arc 10000 times and one way or another it would have to generate points for 10000 circles every single frame instead of just once at the beginning and it would have to call gl.drawXXX 10000 times instead of just once.
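As a rough sketch of the instancing option, here is what that could look like with the ANGLE_instanced_arrays extension, building on the example above. It assumes the vertex shader was extended with an extra per-instance "attribute vec2 offset" that gets added to position; the names offsetLoc, offsetBuf and numCircles are made up here for illustration.

const ext = gl.getExtension('ANGLE_instanced_arrays');
const numCircles = 10000;
const offsetLoc = gl.getAttribLocation(program, 'offset');   // assumed extra attribute

// one vec2 offset per circle, uploaded once
const offsets = new Float32Array(numCircles * 2);
for (let i = 0; i < numCircles; ++i) {
  offsets[i * 2 + 0] = Math.random() * gl.canvas.width;
  offsets[i * 2 + 1] = Math.random() * gl.canvas.height;
}
const offsetBuf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuf);
gl.bufferData(gl.ARRAY_BUFFER, offsets, gl.STATIC_DRAW);
gl.enableVertexAttribArray(offsetLoc);
gl.vertexAttribPointer(offsetLoc, 2, gl.FLOAT, false, 0, 0);
ext.vertexAttribDivisorANGLE(offsetLoc, 1);   // advance once per circle, not per vertex

// the circle geometry from the example above is shared by every instance,
// so a single draw call renders all 10000 circles
ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, numEdgePoints * 3, numCircles);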
Of course the converse is that canvas is easy. Drawing the circle took 3 lines of code. In WebGL, because you need to set up and write shaders, it probably takes at least 60 lines of code. In fact the example above is about 60 lines, not including the code to compile and link shaders (~10 lines). On top of that canvas supports transforms, patterns, gradients, masks, etc., all options we'd have to add with lots more lines of code in WebGL. So canvas is basically trading speed for ease of use compared to WebGL.
Canvas does not execute a hardware pipeline of processing stages that turns sets of vertices and indices into triangles which are then given textures and lighting, as OpenGL/WebGL does. This is the root cause of such speed differences: the Canvas counterparts to those operations are all done on the CPU, with only the final rendering sent to the graphics hardware. The speed differences are particularly evident when a massive number of vertices are synthesized/animated on Canvas versus WebGL.
Alas, we are on the cusp of hearing the public announcement of the modern replacement for OpenGL: Vulkan, whose remit includes exposing general-purpose compute in a more pedestrian way than OpenCL/CUDA, as well as baking in use of multi-core processors, which might just shift Canvas-like processing onto hardware.
I'm trying to draw an arbitrary polygon with a transformed texture using the Graphics API.
Here's what I'm trying to do in 3 steps:
First, I have a texture (as a BitmapData)
Second, transform the texture: tile it and rotate it around the x, y, or z axis (y-axis for now).
Third, draw a polygon using the transformed texture.
I could rotate it around the z-axis with the code below:
var gr:Graphics = sp.graphics;
gr.clear();
var mat:Matrix = new Matrix();
mat.scale( 0.5, 0.5 );
mat.rotate( angle );
gr.beginBitmapFill( bd, mat, true, true );
gr.moveTo( points[0].x, points[0].y );
for ( var lp1:int = 1; lp1 < points.length; lp1++ )
    gr.lineTo( points[lp1].x, points[lp1].y );
gr.lineTo( points[0].x, points[0].y );
gr.endFill();
But I couldn't rotate the texture around the x or y axis, as that requires some sort of projection, I guess.
I thought about drawing a rotated Bitmap object onto a BitmapData and using it as a texture:
var bmp:Bitmap = new Bitmap( bd );
bmp.rotationY = angle;
var transformedBd:BitmapData = new BitmapData( 256, 256, true, 0 );
transformedBd.draw( bmp );
… and call gr.beginBitmapFill() with the transformedBd …
But with this code, the texture won't be tiled.
I also looked at the drawTriangles() method but AFAIK it only lets me draw a rotated polygon, not a polygon with a rotated texture.
If anyone has insights on this issue, please share.
Any help will be greatly appreciated!
Perhaps you can:
1 - put your 2D texture inside a Sprite or other container
2 - 3D-transform that container, for example by using
myContainer.rotationX = 20;
myContainer.rotationY = 200;
3 - then create a new BitmapData()
4 - and draw the entire myContainer into the BitmapData:
myBitmapData.draw(myContainer, myMatrix, myColorTransform, blendMode, myRectangle, smooth);
5 - and finally delete the original 2D texture and myContainer.
Voila, you now have a 3D-transformed texture inside a single BitmapData.
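A minimal sketch of those five steps in one place, reusing bd, gr, mat and angle from the question (container and transformedBd are just names made up here; the classes are the usual flash.display ones):

var container:Sprite = new Sprite();
container.addChild(new Bitmap(bd));            // 1 - put the 2D texture in a container
container.rotationY = angle;                   // 2 - 3D-transform the container

var transformedBd:BitmapData = new BitmapData(256, 256, true, 0);   // 3 - new BitmapData
transformedBd.draw(container, null, null, null, null, true);        // 4 - draw the container into it

// 5 - the original texture/container can now be discarded; use the result as a tiling fill
gr.beginBitmapFill(transformedBd, mat, true, true);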
I am working on cocos2d-x game development, specifically circle gesture detection. How can I find the angle between two points A and B? My ccTouchesMoved event is as follows:
void HelloWorld::ccTouchesMoved(CCSet *pTouches, CCEvent *pEvent)
{
    CCLog("Touches moved");
    CCTouch *touch = (CCTouch*)pTouches->anyObject();
    location = touch->getLocation();
    location = CCDirector::sharedDirector()->convertToGL(location);
    prevLocation = CCDirector::sharedDirector()->convertToGL(touch->getPreviousLocationInView());
    deltax = prevLocation.x - location.x; // difference of x
    deltay = prevLocation.y - location.y; // difference of y
    angle = ?? // I want this angle using deltax and deltay
}
You need to include the math header, and then you can calculate the angle in degrees using the formula:
angle = atan2 (deltay, deltax) * (180 / PI);
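For example, dropped into the ccTouchesMoved above (atan2f comes from <cmath>; M_PI may need _USE_MATH_DEFINES on some compilers, and cocos2d-x also ships a CC_RADIANS_TO_DEGREES macro that does the same conversion):

#include <cmath>   // for atan2f and M_PI

float deltax = prevLocation.x - location.x;
float deltay = prevLocation.y - location.y;                      // note: .y, not .x
float angle  = atan2f(deltay, deltax) * 180.0f / (float)M_PI;    // degrees in (-180, 180]
if (angle < 0.0f)
    angle += 360.0f;                                             // optional: remap to [0, 360)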
I'm trying to develop a heat map. Initially I have to draw the intensity mask, and since I'm using GWT I randomly generated some coordinates and placed my circles (with the required gradient) at those locations, so the output comes out as circles overlapping each other. The intensity mask from Dylan Vester, by contrast, looks very smooth. How can I draw my heat map, and how is output similar to Dylan Vester's achieved? Also, if I'm drawing circles, how do I decide the intensity at the intersection of two or more circles, and how have they achieved that? Here is my code:
// creating the object for the heat points
Heat_Point x = new Heat_Point();
// Variables for random locations
int Min = 1,Max = 300;
int randomx,randomy;
// Generating set of random values
for( int i = 0 ; i < 100 ; i++ ) {
    // Generating random x and y coordinates
    randomx = Min + (int)(Math.random() * ((Max - Min) + 1));
    randomy = Min + (int)(Math.random() * ((Max - Min) + 1));
    // Drawing the heat points at generated locations
    x.Draw_Heatpoint(c1, randomx, randomy);
}
And here is how I'm plotting my heat point, i.e. the Heat_Point class:
Context con1 = c1.getContext2d(); // c1 is my canvas
CanvasGradient x1;
x1 = ((Context2d) con1).createRadialGradient(x,y,10,x,y,20);
x1.addColorStop(0,"black");
x1.addColorStop(1,"white");
((Context2d) con1).beginPath();
((Context2d) con1).setFillStyle(x1);
((Context2d) con1).arc(x,y,20, 0, Math.PI * 2.0, true);
((Context2d) con1).fill();
((Context2d) con1).closePath();
here I was supposed to add some images but I didn't have enough reputation :D :P
I took a quick look at HeatmapJS (http://www.patrick-wied.at/static/heatmapjs/) and it seems he uses radial gradients (like you have above) and he also uses opacity and a color filter called "multiply blend" to smooth out the intensity of the colors in the heat map.
His code is quite impressive. It's open source, so you might want to check it out!
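The rough idea (a sketch of the general technique, not HeatmapJS's actual code) is to draw each point as a low-alpha radial gradient so overlapping points accumulate, then map the accumulated alpha channel to a color ramp. In plain JavaScript/Canvas terms, with canvas and palette assumed to exist already (palette being a 256*4 RGBA lookup table):

const ctx = canvas.getContext('2d');

function drawHeatPoint(x, y, r) {
  const g = ctx.createRadialGradient(x, y, 0, x, y, r);
  g.addColorStop(0, 'rgba(0, 0, 0, 0.3)');   // semi-transparent center...
  g.addColorStop(1, 'rgba(0, 0, 0, 0)');     // ...fading out, so overlapping points add up
  ctx.fillStyle = g;
  ctx.fillRect(x - r, y - r, r * 2, r * 2);
}

function colorize() {
  const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const d = img.data;
  for (let i = 0; i < d.length; i += 4) {
    const a = d[i + 3];                      // accumulated intensity, 0..255
    d[i]     = palette[a * 4];
    d[i + 1] = palette[a * 4 + 1];
    d[i + 2] = palette[a * 4 + 2];
  }
  ctx.putImageData(img, 0, 0);
}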
I'm trying to add a VBO in Slick2D. All I can find on the web is how to initialize a VBO in a 3D context. Does anyone know how to do it in 2D?
My current test (drawing 4 squares in the Slick context) produces this (I added the coords in black):
(screenshot source: canardpc.com)
Below is my init code (in the init method of my GameState):
// set up OpenGL
GL11.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
GL11.glMaterial(GL11.GL_FRONT, GL11.GL_SPECULAR, floatBuffer(1.0f, 1.0f, 1.0f, 1.0f));
GL11.glMaterialf(GL11.GL_FRONT, GL11.GL_SHININESS, 25.0f);
// set up the camera
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
// create our vertex buffer objects
IntBuffer buffer = BufferUtils.createIntBuffer(1);
GL15.glGenBuffers(buffer);
int vertex_buffer_id = buffer.get(0);
FloatBuffer vertex_buffer_data = BufferUtils.createFloatBuffer(vertex_data_array.length);
vertex_buffer_data.put(vertex_data_array);
vertex_buffer_data.rewind();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vertex_buffer_id);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertex_buffer_data, GL15.GL_STATIC_DRAW);
And here is the render code (in the render method of the game state):
g.setDrawMode(Graphics.MODE_ALPHA_BLEND) ;
// perform rotation transformations
GL11.glPushMatrix();
// render the quads
GL11.glVertexPointer(3, GL11.GL_FLOAT, 28, 0);
GL11.glColorPointer(4, GL11.GL_FLOAT, 28, 12);
GL11.glDrawArrays(GL11.GL_QUADS, 0, vertex_data_array.length / 7);
// restore the matrix to pre-transformation values
GL11.glPopMatrix();
I think something is wrong because all the other rendering disappears (text and sprites) and the coords no longer match the window size.
Edit: I tried something like GL11.glOrtho(0, 800, 600, 0, -1, 1); with strange results.
Thanks
I resolved the issue by adding GL11.glOrtho(0, 800, 600, 0, -1, 1); and disabling the client states again after drawing (glDisableClientState).
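A minimal sketch of one plausible arrangement of that fix in the render code, using the same GL11/GL15 calls and the 800x600 window size from the question's edit:

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, 800, 600, 0, -1, 1);            // top-left origin, like Slick's 2D coords
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();

GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vertex_buffer_id);
GL11.glVertexPointer(3, GL11.GL_FLOAT, 28, 0);
GL11.glColorPointer(4, GL11.GL_FLOAT, 28, 12);
GL11.glDrawArrays(GL11.GL_QUADS, 0, vertex_data_array.length / 7);

// hand the fixed-function state back so Slick's own text/sprite rendering still works
GL11.glDisableClientState(GL11.GL_COLOR_ARRAY);
GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);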
But I will eventually move to the libgdx framework, which does that natively.