In Java using OpenGL I could set up matrices for world coordinates like this:
GL.glMatrixMode(GL.GL_PROJECTION);
GL.glLoadIdentity();
// window size is 640x480
// viewport size is 8x6 (e.g. in meters, so you see only 8x6 meters of the world in a flash game)
GL.glOrtho(0, 8, 0, 6, -1, 1);
How can I do the same in ActionScript? When my tile size is 80px I want to say
mySprite.x = 1; // 80 pixels
mySprite.x = 2; // 160 pixels
mySprite.x = 3; // 240 pixels
and it should make the sprite appear 80, 160 or 240 pixels away from the left.
Are there no equivalent projection possibilities in AS3?
Use a spark.components.Group. It has no background or anything else like that, so even though it kind of replaces a Canvas, it doesn't really; it's essentially just a group of UIComponents. That being said, set the Group's x field to 79 and call the Group's addElement() function with mySprite. Then, if you set mySprite's x field to 1, that's interpreted as 1 relative to the Group, which as a whole is already 79 pixels from the left. So 79 + 1 = 80.
var group:Group = new Group();
group.x = 79;
group.addElement(mySprite);
mySprite.x = 1;
You can define a superclass that redefines the getters and setters for x and y, then derive each of your MovieClips from this class, as sketched below.
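A minimal sketch of that approach, assuming the 80 px tile size from the question (the class name, constant and backing fields are illustrative, not from the question):
package {
    import flash.display.MovieClip;

    // Sketch: world units are converted to pixels inside the x/y setters.
    public class WorldClip extends MovieClip {
        public static const TILE_SIZE:Number = 80; // assumption: 80 px per world unit

        private var _worldX:Number = 0;
        private var _worldY:Number = 0;

        override public function set x(value:Number):void {
            _worldX = value;
            super.x = value * TILE_SIZE; // 1 -> 80 px, 2 -> 160 px, 3 -> 240 px
        }
        override public function get x():Number {
            return _worldX;
        }

        override public function set y(value:Number):void {
            _worldY = value;
            super.y = value * TILE_SIZE;
        }
        override public function get y():Number {
            return _worldY;
        }
    }
}
With this, mySprite.x = 3; places the clip 240 pixels from the left while still reading back as 3.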
Let's say your Starling display-list is as follows:
Stage
|___MainApp
|______Canvas (filter's target)
Then, you decide your MainApp should be rotated 90 degrees and offset a bit:
mainApp.rotation = Math.PI * 0.5;
mainApp.x = stage.stageWidth;
But all of a sudden, the filter keeps applying itself to the target (canvas) at the angle it originally had (as if the MainApp were still at 0 degrees).
(notice in the GIF how the Blur's strong horizontal value continues to only apply horizontally although the parent object turned 90 degrees).
What would need to be changed to apply the filter to the target object before it gets its parent's transform? That way (I'm assuming) the filter's result would be transformed by the parent objects.
Any guess as to how this could be done?
https://github.com/bigp/StarlingShaderIssue
(PS: the filter I'm actually using is custom-made, but this BlurFilter example shows the same issue I'm having with the custom one. If there's any patching-up to do in the shader code, at least it wouldn't necessarily have to be done on the built-in BlurFilter specifically).
I solved this myself with numerous trial and error attempts over the course of several hours.
Since I only needed the shader to run at either 0 or 90 degrees (not actually tweened like the GIF demo shown in the question), I created a shader with two specialized sets of AGAL instructions.
Without going into too much detail: the rotated version basically requires a few extra instructions to flip the x and y fields in the vertex and fragment shader (either by moving them with mov or by calculating the mul or div result directly into the x or y field).
For example, compare the 0 deg vertex shader...
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
"mul posScaled, va1, viewportScale", // pass displacement positions (scaled)
].join("\n");
... with the 90 deg vertex shader:
_vertexShader = [
"m44 op, va0, vc0", // 4x4 matrix transform to output space
"mov posOriginal, va1", // pass texture positions to fragment program
//Calculate the rotated vertex "displacement" UVs
"mov temp1, va1",
"mov temp2, va1",
"mul temp2.y, temp1.x, viewportScale.y", //Flip X to Y, and scale with viewport Y
"mul temp2.x, temp1.y, viewportScale.x", //Flip Y to X, and scale with viewport X
"sub temp2.y, 1.0, temp2.y", //Invert the UV for the Y axis.
"mov posScaled, temp2",
].join("\n");
You can ignore the special aliases in the AGAL example; they're essentially posOriginal = v0, posScaled = v1 and viewportScale = vc4 constants, and I do a string-replace to change them back to their respective registers and fields.
Just a human-readable trick I use to avoid going insane. \☻/
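For instance, the replacement step could look something like this (a sketch; the register mapping comes from the aliases above, the variable name is assumed):
// Sketch: swap the human-readable aliases back to AGAL registers/fields.
agalSource = agalSource
    .replace(/posOriginal/g, "v0")
    .replace(/posScaled/g, "v1")
    .replace(/viewportScale/g, "vc4");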
The part that I struggled with the most was calculating the correct scale to adjust the UVs (with proper detection of Stage / viewport resizes and render-texture size shifts).
Eventually, this is what I came up with in the AS3 code:
var pt:Texture = _passTexture,
    dt:RenderTexture = _displacement.texture,
    notReady:Boolean = pt == null,
    star:Starling = Starling.current;

var finalScaleX:Number, viewRatioX:Number = star.viewPort.width / star.stage.stageWidth;
var finalScaleY:Number, viewRatioY:Number = star.viewPort.height / star.stage.stageHeight;

if (notReady) {
    finalScaleX = finalScaleY = 1.0;
} else if (isRotated) {
    //NOTE: Notice how the native width is divided by the height, instead of by the same side. Weird, but it works!
    finalScaleY = pt.nativeWidth / dt.nativeHeight / _imageRatio / paramScaleX / viewRatioX; //Eureka!
    finalScaleX = pt.nativeHeight / dt.nativeWidth / _imageRatio / paramScaleY / viewRatioY; //Eureka x2!
} else {
    finalScaleX = pt.nativeWidth / dt.nativeWidth / _imageRatio / viewRatioX / paramScaleX;
    finalScaleY = pt.nativeHeight / dt.nativeHeight / _imageRatio / viewRatioY / paramScaleY;
}
Hopefully these extracted pieces of code can be helpful to others with similar shader issues.
Good luck!
If both use hardware acceleration (GPU) to execute code, why is WebGL so much faster than Canvas?
I mean, I want to know why at a low level: the chain from the code down to the processor.
What happens? Do Canvas/WebGL communicate directly with the drivers and then with the video card?
Canvas is slower because it's generic and therefore hard to optimize to the same level that you can optimize WebGL. Let's take a simple example: drawing a solid circle with arc.
Canvas actually runs on top of the GPU as well, using the same APIs as WebGL. So, what does canvas have to do when you draw a circle? The minimum code to draw a circle in JavaScript using canvas 2d is
ctx.beginPath();
ctx.arc(x, y, radius, startAngle, endAngle);
ctx.fill();
You can imagine that, internally, the simplest implementation is:
beginPath creates a buffer (gl.bufferData)
arc generates the points for triangles that make a circle and uploads with gl.bufferData.
fill calls gl.drawArrays or gl.drawElements
But wait a minute ... knowing what we know about how GL works, canvas can't generate the points at step 2, because if we call stroke instead of fill we need a different set of points for a solid circle (fill) than for an outline of a circle (stroke). So, what really happens is something more like
beginPath creates or resets some internal buffer
arc generates the points that make a circle into the internal buffer
fill takes the points in that internal buffer, generates the correct set of triangles for the points in that internal buffer into a GL buffer, uploads them with gl.bufferData, calls gl.drawArrays or gl.drawElements
What happens if we want to draw 2 circles? The same steps are likely repeated.
Let's compare that to what we would do in WebGL. Of course in WebGL we'd have to write our own shaders (Canvas has its own shaders as well). We'd also have to create a buffer and fill it with the triangles for a circle (note we already saved time, as we skipped the intermediate buffer of points). We can then call gl.drawArrays or gl.drawElements to draw our circle. And if we want to draw a second circle? We just update a uniform and call gl.drawArrays again, skipping all the other steps.
const m4 = twgl.m4;
const gl = document.querySelector('canvas').getContext('webgl');

const vs = `
attribute vec4 position;
uniform mat4 u_matrix;
void main() {
  gl_Position = u_matrix * position;
}
`;

const fs = `
precision mediump float;
uniform vec4 u_color;
void main() {
  gl_FragColor = u_color;
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const positionLoc = gl.getAttribLocation(program, 'position');
const colorLoc = gl.getUniformLocation(program, 'u_color');
const matrixLoc = gl.getUniformLocation(program, 'u_matrix');

const positions = [];
const radius = 50;
const numEdgePoints = 64;
for (let i = 0; i < numEdgePoints; ++i) {
  const angle0 = (i    ) * Math.PI * 2 / numEdgePoints;
  const angle1 = (i + 1) * Math.PI * 2 / numEdgePoints;
  // make a triangle
  positions.push(
    0, 0,
    Math.cos(angle0) * radius,
    Math.sin(angle0) * radius,
    Math.cos(angle1) * radius,
    Math.sin(angle1) * radius,
  );
}

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

gl.useProgram(program);

const projection = m4.ortho(0, gl.canvas.width, 0, gl.canvas.height, -1, 1);

function drawCircle(x, y, color) {
  const mat = m4.translate(projection, [x, y, 0]);
  gl.uniform4fv(colorLoc, color);
  gl.uniformMatrix4fv(matrixLoc, false, mat);
  gl.drawArrays(gl.TRIANGLES, 0, numEdgePoints * 3);
}

drawCircle( 50, 75, [1, 0, 0, 1]);
drawCircle(150, 75, [0, 1, 0, 1]);
drawCircle(250, 75, [0, 0, 1, 1]);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
Some devs might look at that and think Canvas caches the buffer so it can just reuse the points on the 2nd draw call. It's possible that's true, but I kind of doubt it. Why? Because of the genericness of the canvas API. fill, the function that does all the real work, doesn't know what's in the internal buffer of points. You can call arc, then moveTo, lineTo, then arc again, and then call fill. All of those points will be in the internal buffer of points when we get to fill.
const ctx = document.querySelector('canvas').getContext('2d');
ctx.beginPath();
ctx.moveTo(50, 30);
ctx.lineTo(100, 150);
ctx.arc(150, 75, 30, 0, Math.PI * 2);
ctx.fill();
<canvas></canvas>
In other words, fill always needs to look at all the points. Another thing: I suspect arc tries to optimize for size. If you call arc with a radius of 2, it probably generates fewer points than if you call it with a radius of 2000. It's possible canvas caches the points, but given the hit rate would likely be small, it seems unlikely.
In any case, the point is that WebGL lets you take control at a lower level, allowing you to skip steps that canvas can't skip. It also lets you reuse data that canvas can't reuse.
In fact, if we know we want to draw 10000 animated circles, we have even more options in WebGL. We could generate the points for 10000 circles, which is a valid option. We could also use instancing. Both of those techniques would be vastly faster than canvas, since in canvas we'd have to call arc 10000 times, it would one way or another have to generate the points for 10000 circles every single frame instead of just once at the beginning, and it would have to call gl.drawXXX 10000 times instead of just once.
Of course the converse is that canvas is easy. Drawing the circle took 3 lines of code. In WebGL, because you need to set up and write shaders, it probably takes at least 60 lines of code. In fact the example above is about 60 lines, not including the code to compile and link shaders (~10 lines). On top of that, canvas supports transforms, patterns, gradients, masks, etc., all options we'd have to add with lots more lines of code in WebGL. So canvas is basically trading speed for ease of use compared to WebGL.
Canvas does not execute a pipeline of processing stages that turns sets of vertices and indices into triangles which are then given textures and lighting entirely in hardware, as OpenGL/WebGL does. This is the root cause of such speed differences. The Canvas counterparts of such operations are all done on the CPU, with only the final rendering sent to the graphics hardware. The speed differences are particularly evident when a massive number of vertices are synthesized/animated on Canvas versus WebGL.
Alas, we are on the cusp of hearing the public announcement of the modern replacement for OpenGL: Vulkan, whose remit includes exposing general-purpose compute in a more pedestrian way than OpenCL/CUDA, as well as baking in the use of multi-core processors, which might just shift Canvas-like processing onto hardware.
Could someone please explain to me what w.r.t. coordinates are? Or at least direct me to a place that explains what they are? I've been searching for two days or so, and all that I've found is tutorials on how they are used, but not what they actually are or even what w.r.t. stands for.
These tutorials assume I already know what they are, which is stressful because I've never heard of them.
I'm working in AS3, trying to do some parametric surfaces using pixel particles, and I understand these are kind of useful while moving the particles around.
This is the relevant function where they are used, as u, v and w; p is a single particle that also contains x, y, z values that are not being modified.
function onEnter(evt:Event):void {
    dphi = 0.015*Math.cos(getTimer()*0.000132);
    dtheta = 0.017*Math.cos(getTimer()*0.000244);
    phi = (phi + dphi) % pi2;
    theta = (theta + dtheta) % pi2;
    cost = Math.cos(theta);
    sint = Math.sin(theta);
    cosp = Math.cos(phi);
    sinp = Math.sin(phi);
    //We calculate some of the rotation matrix entries here for increased efficiency:
    M11 = cost*sinp;
    M12 = sint*sinp;
    M31 = -cost*cosp;
    M32 = -sint*cosp;
    p = firstParticle;

    //////// redrawing ////////
    displayBitmapData.lock();
    //apply filters pre-update
    displayBitmapData.colorTransform(displayBitmapData.rect, darken);
    displayBitmapData.applyFilter(displayBitmapData, displayBitmapData.rect, origin, blur);
    p = firstParticle;
    do {
        //Calculate rotated coordinates
        p.u = M11*p.x + M12*p.y + cosp*p.z;
        p.v = -sint*p.x + cost*p.y;
        p.w = M31*p.x + M32*p.y + sinp*p.z;
        //Calculate viewplane projection coordinates
        m = fLen/(fLen - p.u);
        p.projX = p.v*m + projCenterX;
        p.projY = p.w*m + projCenterY;
        if ((p.projX > displayWidth)||(p.projX<0)||(p.projY<0)||(p.projY>displayHeight)||(p.u>uMax)) {
            p.onScreen = false;
        }
        else {
            p.onScreen = true;
        }
        if (p.onScreen) {
            //we read the color in the position where we will place another particle:
            readColor = displayBitmapData.getPixel(p.projX, p.projY);
            //we take the blue value of this color to represent the current brightness in this position,
            //then we increase this brightness by levelInc.
            level = (readColor & 0xFF)+levelInc;
            //we make sure that 'level' stays smaller than 255:
            level = (level > 255) ? 255 : level;
            /*
            We create light blue pixels quickly with a trick:
            the red component will be zero, the blue component will be 'level', and
            the green component will be 50% of the blue value. We divide 'level' in
            half using a fast technique: a bit-shift operation of shifting down by one bit
            accomplishes the same thing as dividing by two (for an integer output).
            */
            //dColor = ((level>>1) << 8) | level;
            dColor = (level << 16) | (level << 8) | level;
            displayBitmapData.setPixel(p.projX, p.projY, dColor);
        }
        p = p.next;
    } while (p != null)
    displayBitmapData.unlock();
}
This is the example I'm using: http://www.flashandmath.com/flashcs4/light/
I kinda understand how they are used, but I don't get why.
Thanks in advance.
PS: I'm kind of surprised there is not even a tag related to it.
In that Particle3D.as class linked, they have:
//coords WRT viewpoint axes
public var u:Number;
public var v:Number;
public var w:Number;
From the code example you posted in the question it becomes clear that coords WRT viewpoint axes means coordinates with respect to viewpoint axes, since the code is doing exactly that.
What they are doing is a Camera (or Viewing) Transformation, where the particle's world coordinates (x,y,z) are transformed from the world coordinate system into coordinates in the camera (or view) coordinate system (u,v,w).
(x,y,z) are the coordinates of the particle in the world coordinate system
(u,v,w) are the coordinates of the particle in the camera coordinate system
For example, the world coordinate system might have an origin at (0,0,0), with the camera positioned at something like (5,3,6), with a lookat vector of (1,0,0) and an up vector of (0,1,0).
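As a generic sketch of that transform (this code is mine, not from the linked class; it assumes the camera's axes are available as orthonormal basis vectors in world space):
import flash.geom.Vector3D;

// Sketch: transform a particle's world coordinates into camera coordinates.
// cam is the camera position; axisU, axisV, axisW are the camera's basis
// vectors expressed in world coordinates. All names are illustrative.
function worldToCamera(p:Particle, cam:Vector3D, axisU:Vector3D, axisV:Vector3D, axisW:Vector3D):void {
    var dx:Number = p.x - cam.x;
    var dy:Number = p.y - cam.y;
    var dz:Number = p.z - cam.z;
    // each camera coordinate is the dot product of the camera-relative
    // position with one camera axis
    p.u = dx * axisU.x + dy * axisU.y + dz * axisU.z;
    p.v = dx * axisV.x + dy * axisV.y + dz * axisV.z;
    p.w = dx * axisW.x + dy * axisW.y + dz * axisW.z;
}
In the flashandmath example the camera sits at the origin and only rotates, so the subtraction drops out and the three dot products collapse into the precomputed M11/M12/M31/M32 terms you see in onEnter.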
I am working on a truck game with libgdx and box2d.
In my game, 1 meter = 100 pixels.
My 2d terrain is generated by me and is made of points.
What I did is make a PolygonRegion for the whole polygon and use TextureWrap.Repeat.
The problem is that my game's dimensions are scaled down by 100 times to fit the box2d units.
So my camera width is 800 / 100 and its height 480 / 100 (8 x 4.8 world units).
Here is how I created my polygon region:
box = new Texture(Gdx.files.internal("box.png"));
box.setFilter(TextureFilter.Linear, TextureFilter.Linear);
box.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
TextureRegion region = new TextureRegion(box);
psb = new PolygonSpriteBatch();

float[] vertices = new float[paul.size];
for (int i = 0; i < paul.size; i++) {
    vertices[i] = paul.get(i);
    if (i % 2 == 1)
        vertices[i] += 1f;
}

EarClippingTriangulator a = new EarClippingTriangulator();
ShortArray sar = a.computeTriangles(vertices);
short[] shortarray = new short[sar.size];
for (int i = 0; i < sar.size; i++)
    shortarray[i] = sar.get(i);

PolygonRegion pr = new PolygonRegion(region, vertices, shortarray);
System.out.println(vertices.length + " " + shortarray.length);
ps = new PolygonSprite(pr);
Now I'll just draw the PolygonSprite with my PolygonSpriteBatch.
This will render the texture on the polygon repeatedly, but the picture will be 100 times too big and very stretched.
The left example is the one that I want to achieve, and the right one is the way that my game currently looks.
This PR was merged and looks like it does what you want:
https://github.com/libgdx/libgdx/pull/3799
See RepeatablePolygonSprite.
I am not completely sure whether this will solve your problem (I can't test it right now), but you need to set the texture coordinates of your TextureRegion to a higher value, probably your factor of 100.
So you could try region.setU2(100) and region.setV2(100). Since texture coordinates usually go from [0,1], values higher than that will be outside, and because you set the TextureWrap to Repeat, this will repeat your texture over and over.
This way, the TextureRegion will already show your one texture repeated 100 times in the x and y directions. If you then tell the PolygonSprite to use that region, it should show it as in the image you posted.
Hope that helps... :)
You could create a new texture in code: take your level size, fill it with your texture, and then clear the top side back to the background. Have a look at Pixmap. Maybe this will help you.
Edit:
A TextureRegion doesn't repeat to fit a given size; either you use the Texture itself (with wrap set to Repeat), or you use a TiledDrawable.
Hey guys, hoping someone can help here. I am cropping a bitmap image into tiles using the following function:
function crop( _x:Number, _y:Number, _width:Number, _height:Number, callingScope:MovieClip, displayObject:DisplayObject = null, pixelSnapping:Boolean = false ):Bitmap
{
    var cropArea:Rectangle = new Rectangle( 0, 0, _width, _height );
    var croppedBitmap:Bitmap;
    if (pixelSnapping == true)
    {
        croppedBitmap = new Bitmap( new BitmapData( _width, _height ), PixelSnapping.ALWAYS, true );
    }
    else
    {
        croppedBitmap = new Bitmap( new BitmapData( _width, _height ), PixelSnapping.NEVER, true );
    }
    croppedBitmap.bitmapData.draw( (displayObject != null) ? displayObject : callingScope.stage, new Matrix(1, 0, 0, 1, -_x, -_y), null, null, cropArea, true );
    return croppedBitmap;
}
The width and height being passed in are non-integer values, for instance 18.75. When the new BitmapData is created, it always rounds the value down to an integer, since the constructor arguments are typed as such. For my needs here, the width and height will likely never be integers. Is there a way to create these bitmap pieces of another image at the exact width and height I need, or can a new BitmapData only be created with integer values for width and height?
Thanks for any insight.
EDIT: I realize you can't have a fraction of a pixel, but... What I am trying to achieve is dividing an image into tiles. I want the number of tiles to be variable, say 4 rows by 4 columns, or 6 rows by 8 columns. Dividing an image into X parts results in widths and heights that in most cases are non-integer values, like 18.75 for example. The goal is to divide an image up into tiles and have that image appear, assembled seamlessly, above the source image, where I would then manipulate the individual tiles for various purposes (puzzle game, tiled animation to a new scene, etc.). I need the image, when assembled from all the tile pieces, to be an exact copy of the original image, with no lines between tiles or white edges anywhere, and I need this to happen with non-integer widths and heights for the BitmapData pieces that make up the tiles. Does that make sense? O_o
A BitmapData can only be created with integer values for the dimensions.
And to be honest, I'm trying to imagine what a BitmapData object with floating-point values for its dimensions would be like, but my brain starts to scream in pain: DOES NOT MAKE SENSE!
My brain's a bit melodramatic sometimes.
-- EDIT --
Answer to your edited question:
2 options:
1/ Just copy the original full bitmap data as many times as you have tiles, and then use masks on the bitmap objects to show the appropriate part of each tile.
2/ Make sure no fractional widths or heights are generated by the slicing. For instance, if you have an image of width 213 that you want to split into 2 rows and 2 columns, set the width of the left tiles to 106 and the width of the right tiles to 107 (see the sketch below).
Maybe there are other options, but those two seem to me the easiest, with the most chance of success.
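For option 2, here is a small sketch of how the slicing could distribute the leftover pixels (the function is mine, purely illustrative):
// Sketch: split imageWidth into `cols` integer tile widths that sum exactly
// to imageWidth; the rightmost tiles absorb the leftover pixels.
function tileWidths(imageWidth:int, cols:int):Vector.<int> {
    var widths:Vector.<int> = new Vector.<int>(cols, true);
    var base:int = int(imageWidth / cols);
    var remainder:int = imageWidth - base * cols;
    for (var i:int = 0; i < cols; i++) {
        widths[i] = base + (i >= cols - remainder ? 1 : 0);
    }
    return widths;
}
For example, tileWidths(213, 2) gives [106, 107]: no fractional pixels, and the widths sum exactly to the image width, so there are no gaps. The same idea works for row heights.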
If you need to create tiles with a preferred size, you can use something like this:
private function CreateTilesBySize( bigImage:Bitmap, preferredTileWidth:int, preferredTileHeight:int ):Tiles
{
    var result:Tiles = new Tiles();
    var y:int = 0;
    while ( y < bigImage.height ) {
        result.NewRow();
        const tileHeight:int = Math.min( preferredTileHeight, bigImage.height - y );
        var x:int = 0;
        while ( x < bigImage.width ) {
            const tileWidth:int = Math.min( preferredTileWidth, bigImage.width - x );
            const tile:Bitmap = Crop( x, y, tileWidth, tileHeight, bigImage );
            result.AddTile( tile );
            x += tileWidth;
        }
        y += tileHeight;
    }
    return result;
}
If you need to create an exact number of rows and columns, then you can use:
private function CreateTilesByNum( bigImage:Bitmap, cols:int, rows:int ):Tiles
{
    const preferredTileWidth:int = Math.ceil( bigImage.width / cols );
    const preferredTileHeight:int = Math.ceil( bigImage.height / rows );
    return CreateTilesBySize( bigImage, preferredTileWidth, preferredTileHeight );
}
But remember that the tiles will have different sizes (the last tile in each row and the last tile in each column will be smaller).
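The Tiles container isn't defined in the answer; a minimal sketch of what it might look like (purely an assumption, to make the snippet self-contained):
import flash.display.Bitmap;

// Illustrative only: tiles stored as an array of rows.
class Tiles {
    private var _rows:Array = [];

    public function NewRow():void {
        _rows.push([]);
    }

    public function AddTile(tile:Bitmap):void {
        _rows[_rows.length - 1].push(tile);
    }

    public function get rows():Array {
        return _rows;
    }
}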