I'm trying to run a DFT in 4x4 blocks over this image (look closely; it may be a bit too small to see, but it's a 4x12 pixel image). The first 4x4 square is a checkerboard pattern with each square one pixel in size, the second 4x4 square is the same pattern with each square two pixels in size, and the last 4x4 square is solid black.
The problem I have is that the frequency components I get are not what I expect at all. For example, for the first square I expect to have a DC component in the matrix, but it's not there. I figure I must be doing something wrong, but I'm new to EMGU, so I'm not sure what. Below is my code:
using (Image<Bgr, byte> image = new Image<Bgr, byte>(Openfile.FileName))
using (Image<Gray, float> gray = image.Convert<Gray, float>())
{
    int numRectanglesPerRow = image.Width / WIDTH;
    int numRectanglesPerColumn = image.Height / HEIGHT;

    for (int i = 0; i < numRectanglesPerColumn; i++)
    {
        for (int j = 0; j < numRectanglesPerRow; j++)
        {
            Rectangle rectangle = new Rectangle(WIDTH * j, HEIGHT * i, WIDTH, HEIGHT);
            Image<Gray, float> subImage = gray.Copy(rectangle);

            Matrix<float> dft = new Matrix<float>(subImage.Rows, subImage.Cols, 2);
            CvInvoke.cvDFT(subImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);

            // The real part of the Fourier transform
            Matrix<float> outReal = new Matrix<float>(subImage.Size);
            // The imaginary part of the Fourier transform
            Matrix<float> outIm = new Matrix<float>(subImage.Size);
            CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
        }
    }
}
I know that there are many questions related to this, but none are relevant to my problem.
I have a TiledMap with two layers for the boxes in my game: one for the box graphics and one for the box objects. I want to get the textures of all the boxes so that I can draw them anywhere freely. I wrote some code to find out which tiles a given rectangle (obtained from the object layer of the boxes) covers, and this code works fine. Then, from the graphics layer (a TiledMapTileLayer), I access the textures of those tiles and draw them onto a single texture using Pixmaps. Finally, I store this texture. However, when I render these textures, they are black. Here's the code:
TiledMapTileLayer boxLayer = (TiledMapTileLayer) (map.getLayers().get(2)); // this is the graphics layer
ArrayList<Vector2> cells;
Texture tex;
Pixmap pmap, pmapScld;
int numCols, numRows, cellNum;
int boxInd = 0;
FileTextureData texData;
StaticTiledMapTile tile;

for (RectangleMapObject boxBody : map.getLayers().get(3).getObjects().getByType(RectangleMapObject.class)) { // object layer
    bounds = boxBody.getRectangle();
    cells = getOccupiedCells(bounds);
    numCols = (int) cells.get(0).x; // I have stored the width and height of the texture
    numRows = (int) cells.get(0).y; // in tiles in the first Vector2 of the array
    tex = new Texture(numCols * tileWidthOrg, numRows * tileHeightOrg, Format.RGBA8888);
    cellNum = 1;
    System.out.println(cells);

    for (int i = 0; i < numCols; i++) {
        for (int j = 0; j < numRows; j++) {
            tile = (StaticTiledMapTile) (boxLayer.getCell((int) cells.get(cellNum).x, (int) cells.get(cellNum).y).getTile());
            texData = (FileTextureData) tile.getTextureRegion().getTexture().getTextureData();
            texData.prepare();
            pmap = texData.consumePixmap();
            //pmapScld = new Pixmap(tileWidthScld, tileHeightScld, pmap.getFormat());
            //pmapScld.drawPixmap(pmap, 0, pmap.getHeight(), pmap.getWidth(), pmap.getHeight(),
            //        0, pmapScld.getHeight(), pmapScld.getWidth(), pmapScld.getHeight());
            tex.draw(pmap, i * tileWidthOrg, j * tileHeightOrg);
            pmap.dispose();
            texData.disposePixmap();
            //pmapScld.dispose();
            cellNum++;
        }
    }

    contactListener.addBox(new GBox(world, tex, bounds, boxInd));
    tex.dispose();
    boxInd++;
}
The code for rendering these boxes:
for (GBox box : contactListener.boxes) {
    box.render(sb);
}

public void render(SpriteBatch sb) {
    sb.draw(tex, body.getPosition().x - width / 2, body.getPosition().y - height / 2, width, height);
    //System.out.println("rendering box " + index + " at position " + body.getPosition());
}
How do I create a simple rounded rectangle button in libGDX without using images? The button should have a drop shadow and should handle a pressed state. I want it to be programmatic so that it's easy to change the color, style, etc. later.
My understanding of your question is that you want to create a rounded rectangle within the program, without having to pre-generate any images outside of code.
I was in a similar situation a while back and ended up writing the function below, which generates a rounded-rectangle Pixmap based on the parameters (all units are in pixels). It also works with differing alpha values to allow for opacity (which is why there are two Pixmap objects used).
The resulting Pixmap can then easily be passed to a constructor of a Texture if you find that easier to render with.
public static Pixmap createRoundedRectangle(int width, int height, int cornerRadius, Color color) {
    Pixmap pixmap = new Pixmap(width, height, Pixmap.Format.RGBA8888);
    Pixmap ret = new Pixmap(width, height, Pixmap.Format.RGBA8888);

    // Draw the rounded rectangle into the first pixmap: four corner circles plus two overlapping rectangles.
    pixmap.setColor(color);
    pixmap.fillCircle(cornerRadius, cornerRadius, cornerRadius);
    pixmap.fillCircle(width - cornerRadius - 1, cornerRadius, cornerRadius);
    pixmap.fillCircle(cornerRadius, height - cornerRadius - 1, cornerRadius);
    pixmap.fillCircle(width - cornerRadius - 1, height - cornerRadius - 1, cornerRadius);
    pixmap.fillRectangle(cornerRadius, 0, width - cornerRadius * 2, height);
    pixmap.fillRectangle(0, cornerRadius, width, height - cornerRadius * 2);

    // Copy the covered pixels into the result pixmap so that overlapping shapes don't stack alpha.
    ret.setColor(color);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            if (pixmap.getPixel(x, y) != 0) ret.drawPixel(x, y);
        }
    }

    pixmap.dispose();
    return ret;
}
Using this function it shouldn't be too difficult to make your own wrapper object (e.g. RoundedRectangle) which would re-draw the image every time one of the parameters was changed and needed to be rendered.
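For instance, the returned Pixmap can be uploaded to a Texture once and then drawn with a SpriteBatch (a minimal sketch; the size, radius and color are just example values, and it assumes a SpriteBatch called batch is available):
// Build the pixmap once and upload it to a texture for rendering.
Pixmap pixmap = createRoundedRectangle(200, 80, 16, Color.SKY);
Texture roundedRect = new Texture(pixmap);
pixmap.dispose(); // the texture keeps its own copy on the GPU

// ...then, inside the render loop:
batch.begin();
batch.draw(roundedRect, 100, 100);
batch.end();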
I'm trying to achieve a split effect similar to Fruit Ninja, where the object is cut along the user's swipe. I followed this tutorial and got the split working on the Box2D bodies, but I can't get the sprites to split correctly. What I want is for the body to have a sprite, and for the sprite to split the same way the body does. I'm using PolygonSprite; the sprites split into the correct shapes and move together with the bodies, but the image is offset, and it also seems that every cut just cuts the original texture with no rotation applied. Here's what I get:
Before I make a cut:
After the first cut (notice that the small triangle is not the top right corner of the original image, and also the image is not centered correctly):
And after I cut the small triangle (notice how it rotated back and didn't cut the actual image):
Here is the method in which I split the object:
private void createSlice(Array<Vector2> vertices, int numVertices, Body affectedBody) {
    Vector2 center = findCentroid(vertices, vertices.size);
    for (int i = 0; i < numVertices; i++) {
        vertices.get(i).sub(center);
    }

    BodyDef sliceBody = new BodyDef();
    sliceBody.position.set(center.x, center.y);
    sliceBody.type = BodyDef.BodyType.DynamicBody;

    PolygonShape slicePoly = new PolygonShape();
    float[] verticesArray = new float[vertices.size * 2];
    int j = 0;
    for (int i = 0; i < verticesArray.length; i += 2) {
        verticesArray[i] = vertices.get(j).x;
        verticesArray[i + 1] = vertices.get(j++).y;
    }
    slicePoly.set(verticesArray);

    FixtureDef sliceFixture = new FixtureDef();
    sliceFixture.shape = slicePoly;
    sliceFixture.density = 1;

    Body worldSlice = mWorld.createBody(sliceBody);
    worldSlice.createFixture(sliceFixture);

    for (int i = 0; i < numVertices; i++) {
        vertices.get(i).add(center);
    }

    // create polygon sprite
    float[] scaledVertices = new float[verticesArray.length];
    for (int i = 0; i < verticesArray.length; i++) {
        scaledVertices[i] = verticesArray[i] * Constants.BOX_TO_WORLD;
    }
    short[] triangles = mTriangulator.computeTriangles(scaledVertices).toArray();
    TextureRegion bodyPolyTexReg = ((PolygonSprite) affectedBody.getUserData()).getRegion().getRegion();
    PolygonRegion polyReg = new PolygonRegion(bodyPolyTexReg, scaledVertices, triangles);
    PolygonSprite polySprite = new PolygonSprite(polyReg);
    polySprite.setOrigin(0, 0);

    // set the sprite as the body's user data
    worldSlice.setUserData(polySprite);
}
My guess is that I would have to rotate the vertices to 0 degrees, make the cut, and then rotate them back, but I'm not sure how to do this.
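To illustrate what I mean, something along these lines is what I imagine (an untested sketch only; I'm not sure it's the right approach, and it assumes affectedBody.getAngle() gives the body's rotation and center is the slice centroid as above):
// Untested sketch of the rotate / cut / rotate-back idea.
float angleDeg = affectedBody.getAngle() * MathUtils.radiansToDegrees;

// move the vertices into the body's unrotated local frame
for (int i = 0; i < numVertices; i++) {
    vertices.get(i).sub(center).rotate(-angleDeg);
}

// ...build the slice polygon and PolygonSprite from the unrotated vertices here...

// restore the original world-space vertices
for (int i = 0; i < numVertices; i++) {
    vertices.get(i).rotate(angleDeg).add(center);
}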
Thanks!
I need a way to find the coordinates of each pixel that has a specific color. I could loop through every pixel in the entire image and check whether it matches, but that doesn't seem very efficient. Is there a better way?
You will have to loop through the array, as that is the only way to evaluate all the pixels in the canvas.
However, you can use a 32-bit integer (typed) array instead of the standard 8-bit array - that will be many times faster:
var idata = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height),
    buffer32 = new Uint32Array(idata.data.buffer),
    len = buffer32.length,
    i = 0, x, y,
    matches = [],               /// collected [x, y] positions of matching pixels
    color = 0xffffffff;         /// white color (ABGR format on little-endian systems)

for (; i < len; i++) {
    if (buffer32[i] === color) {
        /// convert i to x/y position here
        x = i % idata.width;
        y = (i / idata.width) | 0; /// divide by the width to get the row
        matches.push([x, y]);
    }
}
Having trawled Stack Overflow and Google, it seems to me that there is no way to disable antialiasing when drawing lines on an HTML5 canvas.
This makes for nice looking lines, but causes me a problem when it comes time to applying a paint bucket / flood fill algorithm.
Part of my application requires that users draw on a canvas, freestyle drawing with basic tools like line size, color... and a paint bucket.
Because lines are rendered with antialiasing they are not a consistent color... with that in mind consider the following:
Draw a thick line in black
Decide at some point later that the line should be red
Apply flood fill to black line
My flood fill algorithm fills the bulk of the line with red, but the edges that were antialiased are detected as being outside the area that should be filled... hence they remain (greys/blues(?) left over from the black line).
The flood fill algorithm does not incorporate something akin to the 'tolerance' Photoshop has. I have considered adding that, but I'm unsure it would help: I don't think the anti-aliasing does something as simple as rendering grey next to a black line; I think it's more advanced than that, taking the surrounding colors into consideration and blending.
Does anyone have any suggestions as to how I can end up with a better paint bucket / flood fill that COMPLETELY flood fills / replaces an existing line or section of a drawing?
If you simply want to change the color of a line: don't use a paint-bucket fill at all.
Store all your lines and shapes as objects/arrays and redraw them when needed.
This not only allows you to change the canvas size without losing everything on it; changing a color becomes a simple matter of changing the color property on your object/array and redrawing, and you can scale everything as vectors instead of raster.
This will also be faster than a bucket fill, as redrawing is for the most part handled internally rather than pixel by pixel in JavaScript, which is what a bucket fill requires.
That being said: you cannot, unfortunately, disable anti-aliasing for shapes and lines, only for images (using the imageSmoothingEnabled property).
An object could look like this:
function myLine(x1, y1, x2, y2, color) {
    this.x1 = x1;
    this.y1 = y1;
    this.x2 = x2;
    this.y2 = y2;
    this.color = color;
}
And then create an instance of it with:
var newLine = new myLine(x1, y1, x2, y2, color);
Then store this to an array:
/// globally:
var myLineStack = [];

/// after x1/y1/x2/y2 and color have been obtained in the draw function:
myLineStack.push(new myLine(x1, y1, x2, y2, color));
Then it is just a matter of iterating through the objects when an update is needed:
/// some index to a line you want to change color for:
myLineStack[index].color = newColor;

/// redraw all (room for optimizations here...)
context.clearRect( ... );

for (var i = 0, currentLine; currentLine = myLineStack[i]; i++) {
    /// new path
    context.beginPath();
    /// set the color for this line
    context.strokeStyle = currentLine.color;
    /// draw the actual line
    context.moveTo(currentLine.x1, currentLine.y1);
    context.lineTo(currentLine.x2, currentLine.y2);
    context.stroke();
}
(There is room for optimization here: for example, you can clear only the area that needs redrawing and redraw just the affected index. You can also group lines/shapes with the same color and draw them with a single setting of strokeStyle, as sketched below.)
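To make the grouping idea concrete, here is a rough sketch (illustrative only, not from the original code); it assumes the caller has already cleared the canvas:
function redrawGrouped(context, myLineStack) {
    /// bucket the lines by color
    var byColor = {}, i, line;
    for (i = 0; i < myLineStack.length; i++) {
        line = myLineStack[i];
        (byColor[line.color] = byColor[line.color] || []).push(line);
    }
    /// one path and one strokeStyle setting per color
    for (var color in byColor) {
        var lines = byColor[color];
        context.beginPath();
        context.strokeStyle = color;
        for (i = 0; i < lines.length; i++) {
            context.moveTo(lines[i].x1, lines[i].y1);
            context.lineTo(lines[i].x2, lines[i].y2);
        }
        context.stroke();
    }
}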
You cannot always redraw the canvas: you may have used filters that cannot be reversed, or simply made so many fill and stroke calls that redrawing would be impractical.
I have my own flood fill based on a simple fill stack; it paints to a tolerance and does its best to lessen anti-aliasing artifacts. Unfortunately, if you have anti-aliasing on, repeated fills will grow the filled region.
Below is the function; adapt it as needed. It is a direct lift from my code, with comments added.
// posX, posY   the fill start position; the pixel at that location is used to test tolerance.
// RGBA         the fill colour as an array of 4 bytes, each in the range 0-255, for R, G, B, A.
// diagonal     if true, also fill into pixels that only touch at the corners.
// imgData      canvas pixel data from the ctx.getImageData method.
// tolerance    fill tolerance, from 0 (only the exact same colour fills) to 255 (fills all but
//              the extreme opposite).
// antiAlias    if true, fill edges to reduce anti-aliasing artifacts.
Bitmaps.prototype.floodFill = function (posX, posY, RGBA, diagonal, imgData, tolerance, antiAlias) {
    var data = imgData.data;     // image data to fill
    var stack = [];              // paint stack used to find new pixels to paint
    var lookLeft = false;        // test directions
    var lookRight = false;
    var w = imgData.width;       // width and height
    var h = imgData.height;
    var painted = new Uint8ClampedArray(w * h); // byte array to mark the painted area
    var dw = w * 4;              // data width
    var x = posX;                // just short versions of pos because I am lazy
    var y = posY;
    var ind = y * dw + x * 4;    // get the starting pixel index
    var sr = data[ind];          // get the start colour that we will test tolerance against
    var sg = data[ind + 1];
    var sb = data[ind + 2];
    var sa = data[ind + 3];
    var sp = 0;
    var dontPaint = false;       // flag to indicate whether checkColour may paint

    // Checks whether a pixel passes the colour tolerance, is already painted, or is out of bounds.
    // If the pixel is over tolerance and not yet painted, set it to reduce anti-aliasing artifacts.
    var checkColour = function (x, y) {
        if (x < 0 || y < 0 || y >= h || x >= w) { // test bounds
            return false;
        }
        var ind = y * dw + x * 4; // get the index of the pixel
        var dif = Math.max(       // get the max channel difference
            Math.abs(sr - data[ind]),
            Math.abs(sg - data[ind + 1]),
            Math.abs(sb - data[ind + 2]),
            Math.abs(sa - data[ind + 3])
        );
        if (dif < tolerance) {    // if under tolerance, pass it
            dif = 0;
        }
        var paint = Math.abs(sp - painted[y * w + x]); // is it already painted?
        if (antiAlias && !dontPaint) { // mitigate the anti-aliasing effect
            // if it failed tolerance and has not been painted, set the pixel to
            // reduce anti-aliasing artifacts
            if (dif !== 0 && paint !== 255) {
                data[ind] = RGBA[0];
                data[ind + 1] = RGBA[1];
                data[ind + 2] = RGBA[2];
                data[ind + 3] = (RGBA[3] + data[ind + 3]) / 2; // blend the alpha channel
                painted[y * w + x] = 255; // flag the pixel as painted
            }
        }
        return (dif + paint) === 0 ? true : false; // return the tolerance status
    }

    // set a pixel and flag it as painted
    var setPixel = function (x, y) {
        var ind = y * dw + x * 4;  // get the index
        data[ind] = RGBA[0];       // set RGBA
        data[ind + 1] = RGBA[1];
        data[ind + 2] = RGBA[2];
        data[ind + 3] = RGBA[3];
        painted[y * w + x] = 255;  // 255 or any number > 0 will do
    }

    stack.push([x, y]); // push the first pixel to paint onto the paint stack
    while (stack.length) {        // while there are pixels on the stack
        var pos = stack.pop();    // get the pixel
        x = pos[0];
        y = pos[1];
        dontPaint = true;         // turn off anti-aliasing while scanning
        while (checkColour(x, y - 1)) { // find the topmost in-tolerance pixel of this column
            y -= 1;
        }
        dontPaint = false;        // turn anti-aliasing back on if it is being used
        // check top-left and top-right if diagonal painting is allowed
        if (diagonal) {
            if (!checkColour(x - 1, y) && checkColour(x - 1, y - 1)) {
                stack.push([x - 1, y - 1]);
            }
            if (!checkColour(x + 1, y) && checkColour(x + 1, y - 1)) {
                stack.push([x + 1, y - 1]);
            }
        }
        lookLeft = false;         // set look directions;
        lookRight = false;        // only push a new seed when a left/right run starts
        while (checkColour(x, y)) {  // walk down the column until out of tolerance
            setPixel(x, y);          // set the pixel
            if (checkColour(x - 1, y)) { // is the pixel to the left fillable?
                if (!lookLeft) {
                    stack.push([x - 1, y]); // push a new area to fill if found
                    lookLeft = true;
                }
            } else if (lookLeft) {
                lookLeft = false;
            }
            if (checkColour(x + 1, y)) { // is the pixel to the right fillable?
                if (!lookRight) {
                    stack.push([x + 1, y]); // push a new area to fill if found
                    lookRight = true;
                }
            } else if (lookRight) {
                lookRight = false;
            }
            y += 1; // move down one pixel
        }
        // check the diagonals below the run
        if (diagonal) { // check for diagonal areas and push them to be painted
            if (checkColour(x - 1, y) && !lookLeft) {
                stack.push([x - 1, y]);
            }
            if (checkColour(x + 1, y) && !lookRight) {
                stack.push([x + 1, y]);
            }
        }
    }
    // all done
}
There is a better way that gives higher quality results: the above code can be adapted to use the painted array to mark the fill edges and then, after the fill has completed, scan the painted array and apply a convolution filter to each edge pixel you marked. The filter is directional (depending on which sides are painted) and the code is too long for this answer, but this points you in the right direction and the infrastructure is above.
Another way to improve the image quality is to super-sample the image you are drawing to. Keep a second canvas that is double the size of the image being painted. Do all your drawing to that canvas and display it to the user on another canvas at half size, using ctx.imageSmoothingEnabled and ctx.setTransform(0.5, 0, 0, 0.5, 0, 0). When the drawing is done, halve its size manually with the code below (don't rely on the canvas's imageSmoothingEnabled for this, as it gets it wrong).
Doing this will greatly improve the quality of your final image and, combined with the fill above, almost completely eliminate anti-aliasing artifacts from flood fills.
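For context, the double-resolution setup described above might look roughly like this (a sketch only; the canvas variables and element id are illustrative). The manual halving itself is done by the code that follows the sketch.
// Sketch of the 2x back buffer: draw at double resolution, show at half size.
var displayCanvas = document.getElementById("view");   // the canvas the user sees (illustrative id)
var displayCtx = displayCanvas.getContext("2d");
var backCanvas = document.createElement("canvas");     // off-screen canvas at double resolution
backCanvas.width = displayCanvas.width * 2;
backCanvas.height = displayCanvas.height * 2;
var ctxS = backCanvas.getContext("2d");                // do all drawing and flood fills here

// while editing, present the back buffer at half size
displayCtx.imageSmoothingEnabled = true;
displayCtx.setTransform(0.5, 0, 0, 0.5, 0, 0);
displayCtx.drawImage(backCanvas, 0, 0);
displayCtx.setTransform(1, 0, 0, 1, 0, 0);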
// ctxS is the source canvas context
var w = ctxS.canvas.width;
var h = ctxS.canvas.height;
var data = ctxS.getImageData(0, 0, w, h);
var d = data.data;
var x, y;
var ww = w * 4;    // source row stride in bytes
var ww4 = ww + 4;  // one row down, one pixel right
for (y = 0; y < h; y += 2) {
    for (x = 0; x < w; x += 2) {
        var id = y * ww + x * 4;                                   // top-left pixel of the 2x2 block
        var id1 = Math.floor(y / 2) * ww + Math.floor(x / 2) * 4;  // destination pixel in the top-left quarter
        // average each channel over the 2x2 block (root mean square)
        d[id1] = Math.sqrt((d[id] * d[id] + d[id + 4] * d[id + 4] + d[id + ww] * d[id + ww] + d[id + ww4] * d[id + ww4]) / 4);
        id += 1;
        id1 += 1;
        d[id1] = Math.sqrt((d[id] * d[id] + d[id + 4] * d[id + 4] + d[id + ww] * d[id + ww] + d[id + ww4] * d[id + ww4]) / 4);
        id += 1;
        id1 += 1;
        d[id1] = Math.sqrt((d[id] * d[id] + d[id + 4] * d[id + 4] + d[id + ww] * d[id + ww] + d[id + ww4] * d[id + ww4]) / 4);
        id += 1;
        id1 += 1;
        d[id1] = Math.sqrt((d[id] * d[id] + d[id + 4] * d[id + 4] + d[id + ww] * d[id + ww] + d[id + ww4] * d[id + ww4]) / 4);
    }
}
ctxS.putImageData(data, 0, 0); // save imgData
// grab it again as a new image; we don't want to add artifacts from the GPU
var data = ctxS.getImageData(0, 0, Math.floor(w / 2), Math.floor(h / 2));
var canvas = document.createElement("canvas");
canvas.width = Math.floor(w / 2);
canvas.height = Math.floor(h / 2);
var ctxS = canvas.getContext("2d", { alpha: true });
ctxS.putImageData(data, 0, 0);
// result canvas with the downsampled, high quality image