How do I get the width of a bitmap using CreateJS?

How do I get the width of a bitmap image in CreateJS? I need it so I can calculate the scaleX value that changes only the width of the image. By the way, is there any direct way to change the image width, or do I need to use the scaleX property?
Every image is preloaded with PreloadJS in the library.
piecesArray[0].val = Math.floor(Math.random() * 50);
piecesArray[0].sym = new lib["a" + piecesArray[0].val]();
model = new createjs.Bitmap(piecesArray[0].sym);
// (function to change width)
stage.addChild(model.image);
I've inspected all of model's properties in the console and there is no getBounds(), width, height, or nominalBounds...
P.S. This is what happens when I test it in the console:
model = new createjs.Bitmap(new lib["c"+ 1]());
a {_listeners: null, _captureListeners: null, alpha: 1, cacheCanvas: null, cacheID: 0…}
console.log(model.image)
VM281:2 lib.c1 {spriteSheet: a, paused: true, currentAnimationFrame: 0, _animation: null, currentAnimation: null…}
undefined
console.log(model.image.width)
VM282:2 undefined
undefined

You can get the size of the image via bitmap.image.width, so in your code it would be model.image.width. And no, there is no direct way to set the width of a bitmap - you have to set it via scaleX/scaleY.
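For example, a minimal sketch (assuming model.image is a plain preloaded image and targetWidth is a hypothetical value, not from the question):
var targetWidth = 232;                       // hypothetical target width in pixels
var scale = targetWidth / model.image.width; // scaleX needed to reach targetWidth
model.scaleX = scale;
model.scaleY = scale;                        // optional: keep the aspect ratio
stage.update();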

Solved it by resizing before creating the Bitmap. I think the reason I couldn't get the width value is that the image comes from a sprite sheet.
piecesArray[0].sym.scaleX = 232 / piecesArray[0].sym.getBounds().width;
piecesArray[0].sym.scaleY = 150 / piecesArray[0].sym.getBounds().height;
model = new createjs.Bitmap(piecesArray[0].sym);

Related

Font scale lower than 0.01 is handled as 0.01

Every value lower than 0.01 that I set with .scale() is handled as if I had set 0.01, so the rendered text is never smaller than it is at 0.01.
// assumes the AssetManager already has the FreeTypeFontGeneratorLoader / FreetypeFontLoader registered
FreetypeFontLoader.FreeTypeFontLoaderParameter sizeParams = new FreetypeFontLoader.FreeTypeFontLoaderParameter();
sizeParams.fontFileName = "MyFont.ttf";
sizeParams.fontParameters.size = (int) Math.ceil(2 * MINIMUM_VIEWPORT_SIZE_PIXEL / 9f / 2f / 2f);
sizeParams.fontParameters.color = new Color(Color.RED);
sizeParams.fontParameters.borderColor = new Color(Color.GREEN);
sizeParams.fontParameters.borderWidth = 2;
sizeParams.fontParameters.minFilter = Texture.TextureFilter.Linear;
sizeParams.fontParameters.magFilter = Texture.TextureFilter.Linear;
assetManager.load("MyFont.ttf", BitmapFont.class, sizeParams);
assetManager.finishLoading();
BitmapFont fontFreeType = assetManager.get("MyFont.ttf", BitmapFont.class); // same name as in load()
Label.LabelStyle miniLabelStyle = new Label.LabelStyle();
miniLabelStyle.font = fontFreeType;
miniLabelStyle.font.getData().scale(0.005f);
Label labelDebug = new Label("my sample text", miniLabelStyle);
I tried this, with no change (whether I set it to true or false):
miniLabelStyle.font.setUseIntegerPositions(false);
I tried this, but the text comes out very grainy:
labelDebug.setFontScale(0.5f);
How can I get a scale lower than 0.01?
This is just a guess, since I don't use FreetypeFontGenerator, but maybe what you're missing is mip mapping, which is necessary for drawing images smaller than their original size without them getting blurry and chunky.
You'll want to make these two settings to enable it, I think:
sizeParams.fontParameters.minFilter = Texture.TextureFilter.MipMapLinearLinear;
sizeParams.fontParameters.genMipMaps = true;
You might also need to add padding around the glyphs so characters don't absorb some of their neighbors when shrunk down (the padTop, padLeft, padBottom, and padRight settings; see the sketch below).
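A minimal sketch of those parameters together (the field names come from FreeTypeFontGenerator.FreeTypeFontParameter; the padding values are only illustrative):
sizeParams.fontParameters.genMipMaps = true;
sizeParams.fontParameters.minFilter = Texture.TextureFilter.MipMapLinearLinear;
sizeParams.fontParameters.magFilter = Texture.TextureFilter.Linear;
// illustrative padding so shrunken glyphs don't bleed into their neighbours
sizeParams.fontParameters.padTop = 2;
sizeParams.fontParameters.padLeft = 2;
sizeParams.fontParameters.padBottom = 2;
sizeParams.fontParameters.padRight = 2;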
However, there's a reason mip mapping defaults to being disabled. Text looks clearest if it's generated at exactly the size it will be on screen and is rendered with nearest filtering. This isn't universally true, since you might be projecting your text in 3D space or scaling it up and down in real-time. But if it's static text for a GUI, it's probably best to calculate exactly what size it would need to be for it to be one-texture-pixel to one-screen-pixel.
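A rough sketch of that sizing idea, assuming a libGDX viewport and a desired text height in world units (worldHeight and viewport are my assumptions, not from the question): generate the glyphs at their final on-screen pixel size and then use the font at scale 1.
// hypothetical: compute the on-screen pixel size instead of scaling the font down
float pixelsPerWorldUnit = Gdx.graphics.getHeight() / viewport.getWorldHeight();
int fontPixelSize = Math.round(worldHeight * pixelsPerWorldUnit); // worldHeight = desired text height in world units
sizeParams.fontParameters.size = fontPixelSize; // then draw the font unscaled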

`Autodesk.Viewing.ScreenShot.getScreenShotWithBounds` uses viewer's default camera dimensions instead of bounds

Following this advice, I am trying to replace our usage of viewer::getScreenshot with Autodesk.Viewing.ScreenShot.getScreenShotWithBounds, which calls Autodesk.Viewing.ScreenShot.getScreenShot, to support larger screenshots. However, the bounds are ignored. The behaviour seems to be that it takes a screenshot of the viewer's default camera at the viewer's dimensions and then stretches the image to fit the given width and height. I expected the function to return a screenshot of the elements inside the given bounding box.
Is the function getScreenShotWithBounds supposed to do something different from what I am assuming?
Example code (LMV 7.40.0):
const bbounds: THREE.Box3; // calculated for some elements I want to screenshot
Autodesk.Viewing.ScreenShot.getScreenShotWithBounds(NOP_VIEWER, Math.ceil(bbounds.size().x * 4), Math.ceil(bbounds.size().y * 4), (blob) => window.open(blob), { bounds: bbounds, margin: 0 });
Manual screenshot of the viewer
Returned image of getScreenShotWithBounds
Update:
I misunderstood the function. Autodesk.Viewing.ScreenShot.getScreenShotWithBounds just fits the bounds into the camera view. The bounds are not used for any cropping. See my more detailed answer.
The given width and height of Autodesk.Viewing.ScreenShot.getScreenShotWithBounds must be of the same aspect ratio as the viewer (see Adam Nagy's answer), i.e.:
getScreenShotWithBounds(NOP_VIEWER, NOP_VIEWER.getDimensions().width * 4, NOP_VIEWER.getDimensions().height * 4, options);
getScreenShotWithBounds just fits the bounds into the camera view (internally it calls viewer.navigation.fitBounds(true, bounds, false);). The bounds are not used for any kind of cropping / pixel calculation or otherwise.
To get a specific aspect ratio by cropping, you have to provide getCropBounds in the options parameter.
For example, to force an aspect ratio by cropping:
getCropBounds: function (viewer: Autodesk.Viewing.Viewer3D, camera: Autodesk.Viewing.UnifiedCamera, bounds: THREE.Box3): THREE.Box2 {
    // calculate the crop bounds in pixels
    // if the crop bounds are larger, the screenshot's width / height is taken
    const aspectRatio = new THREE.Vector2(4, 3);
    const viewerBoundsWidthRatio = width / aspectRatio.x;
    const viewerBoundsHeightRatio = height / aspectRatio.y;
    const cropToHeight = viewerBoundsWidthRatio > viewerBoundsHeightRatio;
    const smallerScaleRatio = cropToHeight ? viewerBoundsHeightRatio : viewerBoundsWidthRatio;
    const cropBoundsSize = aspectRatio.clone().multiplyScalar(smallerScaleRatio);
    const cropBounds = new THREE.Box2(new THREE.Vector2(0, 0), new THREE.Vector2(cropBoundsSize.x, cropBoundsSize.y));
    const offset = cropToHeight ? new THREE.Vector2((width - cropBoundsSize.x) / 2, 0) : new THREE.Vector2(0, (height - cropBoundsSize.y) / 2);
    cropBounds.min.add(offset);
    cropBounds.max.add(offset);
    return cropBounds;
}
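A sketch of how this could plug into the call, mirroring the question's invocation and assuming the function above has been pulled out as a standalone function named getCropBounds (width and height here are the requested screenshot size):
// hypothetical wiring; width and height keep the viewer's aspect ratio
const width = NOP_VIEWER.getDimensions().width * 4;
const height = NOP_VIEWER.getDimensions().height * 4;
const options = {
    bounds: bbounds,
    margin: 0,
    getCropBounds: getCropBounds // the function shown above
};
Autodesk.Viewing.ScreenShot.getScreenShotWithBounds(NOP_VIEWER, width, height, (blob) => window.open(blob), options);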
The squashed image you are getting suggests it, but it's not clear from the comments in the source code: you should keep the viewer's aspect ratio for the width and height input parameters. So you should only scale them depending on your needs: getScreenShotWithBounds(NOP_VIEWER, viewer_width * x, viewer_height * x, ...)
Then the bounds in the options should take care of the cropping.

Merge two images and retain transparency

It has been a while since I programmed in AS3. Now I have a problem where I need to merge two images, where the upper image is a PNG that must retain its transparency. The lower image must show through the transparent area of the upper image, a bit like a masked layer.
The merge should result in a display object. This object will later be sent to a method with the following signature:
public function addImage(
    display_object:DisplayObject,
    x:Number = 0,
    y:Number = 0,
    width:Number = 0,
    height:Number = 0,
    image_format:String = "PNG",
    quality:Number = 100,
    alpha:Number = 1,
    resizeMode:String = "None",
    blendMode:String = "Normal",
    keep_transformation:Boolean = true,
    link:String = ''
):void
Any advice is of the utmost interest. Thanks!
UPDATE:
After some struggling I've come up with this:
var bitmapDataBuffer:BitmapData = new BitmapData(front.loader.width, front.loader.height, true);
bitmapDataBuffer.draw(front.loader);
var bitmapOverlay:BitmapData = new BitmapData(front.loader.width, front.loader.height, true);
bitmapOverlay.draw(frontBanner.loader);
var rect:Rectangle = new Rectangle(0, 0, front.loader.width, front.loader.height);
var pt:Point = new Point(0, 0);
var mult:uint = 0x00;
bitmapOverlay.merge(bitmapDataBuffer, rect, pt, mult, mult, mult, mult);
var bmp:Bitmap = new Bitmap(bitmapOverlay);
pdf.addImage(bmp, 0, 0, 0, 0, ImageFormat.PNG, 100, 1, ResizeMode.FIT_TO_PAGE);
The problem is that my background image (represented by bitmapDataBuffer) is totally overwritten by my upper image (the one I call the overlay).
The overlay image is a png image. This image has a part of it that is transparent. Through this transparency I want to see my background image.
Any more suggestions?
You should be more specific about what kind of merge you want. You have a few options:
BitmapData.copyPixels - Provides a fast routine to perform pixel manipulation between images with no stretching, rotation, or color effects. This method copies a rectangular area of a source image to a rectangular area of the same size at the destination point of the destination BitmapData object.
BitmapData.merge - Performs per-channel blending from a source image to a destination image. For each channel and each pixel, a new value is computed based on the channel values of the source and destination pixels.
BitmapData.draw - Draws the source display object onto the bitmap image, using the Flash runtime vector renderer. You can specify matrix, colorTransform, blendMode, and a destination clipRect parameter to control how the rendering performs.
Each works for a different purpose: the first just copies one image over another (and can keep/merge alphas), the second merges channel data and modifies it, and the third is the easiest and can draw one bitmap over another as well as use blend modes (see the sketch below).
Just choose one! :)
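For instance, a minimal sketch of the draw approach, reusing the names from the question's update (front is the background loader, frontBanner the transparent PNG on top):
var composite:BitmapData = new BitmapData(front.loader.width, front.loader.height, true, 0x00000000); // fully transparent canvas
composite.draw(front.loader);       // background first
composite.draw(frontBanner.loader); // PNG overlay on top; its alpha channel is respected
var merged:Bitmap = new Bitmap(composite);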
In order to place the overlay image over the buffer image in your case, use copyPixels() with mergeAlpha set to true.
bitmapDataBuffer.copyPixels(bitmapOverlay, rect, new Point(), null, null, true);
This will copy data from bitmapOverlay onto those parts of bitmapDataBuffer where the overlay's alpha is above 0, blending semi-transparent regions with the background.

How to change sprite width and cause more texture repetition?

I have a sprite with a repeated texture, and I want to change the sprite's width frequently in such a way that the texture is repeated more times, instead of keeping the same number of repetitions and just scaling them.
This is what happens when I change the width with
setScaleX(factor);
I also tried getContentSize(), changing the width, and then setContentSize(newContentSize), but it didn't help; it just caused strange behaviour. How can I change the sprite's width so that the texture repeats more times? Is this possible at all?
(I could remove the current sprite and recreate a new one, but that looks like a brute-force solution and I am looking for something more elegant.)
Use Texture2D::TexParams, like:
Texture2D* tex = Director::getInstance()->getTextureCache()->addImage("CloseNormal.png");
Texture2D::TexParams param = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
tex->setTexParameters(param);
auto size = tex->getContentSize();
auto s = Sprite::createWithTexture(tex, Rect(0,0,size.width * 2, size.height * 2));
s->setPosition(Point(300,300));
this->addChild(s);
Or like:
auto s = Sprite::create("CloseSelected1.png");
auto size = s->getContentSize();
Texture2D::TexParams param = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
s->getTexture()->setTexParameters(param);
s->setTextureRect(Rect(0, 0, size.width * 2, size.height * 2));
s->setPosition(Point(200,200));
this->addChild(s);

AIR Stage3D - confining GPU filter effects to a portion of the stage

I'm using this library to create filter effects in an application. The problem I am having is that the effects are applied to the ENTIRE stage instead of just a portion of it.
Does anyone know of a way to define a 'window' or 'viewport' on a Stage3D layer? I checked the documentation for Stage3D but nothing seems exposed that would help.
You can set the width and height in the configureBackBuffer method of Context3D and the x and y in the Stage3D instance:
stage3D.x = stage3D.y = 0;
context3D.configureBackBuffer(width, height, antiAlias, enableDepthAndStencil);
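For example, a hypothetical 400×300 window positioned at (100, 50) on the stage (the numbers are only illustrative):
stage3D.x = 100;
stage3D.y = 50;
context3D.configureBackBuffer(400, 300, 0, true); // width, height, antiAlias, enableDepthAndStencil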
In the lib you're using, the width and height are set to match stageWidth/stageHeight. Link to the specific line of code:
https://github.com/inspirit/GPUImage/blob/12d32dab0e479620fd6420dc3fa7fcfe726502d2/examples/GPUImageShowcase.as#L120
stageW = stage.stageWidth;
stageH = stage.stageHeight;
// Setup context
var stage3D:Stage3D = stage.stage3Ds[0];
stage3D.removeEventListener(Event.CONTEXT3D_CREATE, onContextCreated);
context3D = stage3D.context3D;
context3D.configureBackBuffer(
    stageW,
    stageH,
    antiAlias,
    enableDepthAndStencil
);