I am using the texture packer in libGDX. If I set the packer page size to 2,048 * 2,048, it produces a single atlas image of 3.14 MB; if I set it to 1,024 * 1,024, it produces four atlas images with a total size of 2.94 MB. Which packer size should I prefer?
You should prefer 1024*1024
Pros:
Textures of this size can be loaded even on low-density (ldpi) phones such as the Samsung Galaxy Y.
At 4 bytes per pixel (RGBA8888), a 2048*2048 texture will take 16 MB of graphics memory,
while each 1024*1024 texture will take only 4 MB.
So I think you can go with 1024*1024.
I would also suggest that if you are using images where the alpha channel does not matter (e.g. backgrounds), you can later convert the .png to .jpg and edit the image name in your pack file from .png to .jpg.
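For illustration only (the file names here are placeholders), the page image is referenced by name at the top of the generated pack file, so after converting the image you would change the first line:
background.png
size: 1024,1024
format: RGBA8888
to read background.jpg instead, leaving the rest of the file untouched.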
Hope it helps.
I'm writing a screen capture application that uses the UWP Screen Capture API.
As a result, I get a callback each frame with an ID3D11Texture2D containing the target screen's or application's image, which I want to use as input to Media Foundation's SinkWriter to create an H.264 video in an MP4 container file.
There are two issues I am facing:
The texture is not CPU readable and attempts to map it fail
The texture has padding (image stride > pixel width * pixel format size), I assume
To solve them, my approach has been to:
Use ID3D11DeviceContext::CopyResource to copy the original texture into a new one created with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ flags
Since that texture too has padding, create an IMFMediaBuffer wrapping it with MFCreateDXGISurfaceBuffer, cast it to IMF2DBuffer, and use IMF2DBuffer::ContiguousCopyTo to copy into another IMFMediaBuffer that I create with MFCreateMemoryBuffer
So I'm basically copying around each and every frame two times, once on the GPU and once on the CPU, which works, but seems way inefficient.
What is a better way of doing this? Is MediaFoundation smart enough to deal with input frames that have padding?
The inefficiency comes from your attempt to Map the texture rather than use it directly as video encoder input. The MF H.264 encoder, which is in most cases a hardware encoder, can take video-memory-backed textures as direct input, and this is what you want to do (setting up the encoder accordingly - see the D3D/DXGI device managers).
Padding does not apply to texture frames. In the case of padding in frames backed by traditional system memory, Media Foundation primitives are normally capable of processing data with padding: see Image Stride and MF_MT_DEFAULT_STRIDE, and the other Video Format Attributes.
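As a rough, untested sketch of that direct-input path (names such as d3d11Device, frameTexture, videoStreamIndex and the timestamps are placeholders; media type setup and error handling are omitted):
// Headers/libs assumed: mfapi.h, mfidl.h, mfreadwrite.h, wrl/client.h,
// linked against mfplat, mfreadwrite, mfuuid. The ID3D11Device should be
// created with D3D11_CREATE_DEVICE_VIDEO_SUPPORT and multithread protection.
using Microsoft::WRL::ComPtr;

// One-time setup: hand the D3D11 device to the sink writer via a DXGI device manager.
UINT resetToken = 0;
ComPtr<IMFDXGIDeviceManager> deviceManager;
MFCreateDXGIDeviceManager(&resetToken, &deviceManager);
deviceManager->ResetDevice(d3d11Device.Get(), resetToken);

ComPtr<IMFAttributes> attributes;
MFCreateAttributes(&attributes, 2);
attributes->SetUnknown(MF_SINK_WRITER_D3D_MANAGER, deviceManager.Get());
attributes->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);

ComPtr<IMFSinkWriter> sinkWriter;
MFCreateSinkWriterFromURL(L"capture.mp4", nullptr, attributes.Get(), &sinkWriter);
// ... add the H.264 output stream and set the RGB32/NV12 input type as usual ...

// Per captured frame: wrap the GPU texture directly - no staging copy, no Map.
ComPtr<IMFMediaBuffer> buffer;
MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), frameTexture.Get(), 0, FALSE, &buffer);

ComPtr<IMFSample> sample;
MFCreateSample(&sample);
sample->AddBuffer(buffer.Get());
sample->SetSampleTime(frameTimeHns);          // in 100-ns units
sample->SetSampleDuration(frameDurationHns);
sinkWriter->WriteSample(videoStreamIndex, sample.Get());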
I am using 12 image buttons in my game. Should I make a single texture pack for all of the buttons, or a separate .pack file and .png for each individual button?
You should use as few images as possible to reduce the number of images that need to be loaded into memory.
"The main reason to use a texture packer is that loading individual
images is expensive. Loading 200 small images would take a lot of
processing time whereas loading 1 image and using portions of that
image would use a considerably smaller amount."
Quote from gamedevelopment.blog
I've just been able to set up Cocos2d-x for a multi-resolution environment, i.e. supporting all/most of the available iPhone screen sizes.
My next step is to scrap the individual files (bunch of pngs) and use sprite atlases (aka sprite sheets) instead.
How does this work in Cocos2D-x? Does anyone have a working sample setup?
Once again, Google and existing documentation (well, I don't think that exists) fail me.
For cocos2d-x v3.5...
The cocos2d-x test classes are always a great place to look to see how things can be done with cocos2d-x.
Check out the AppDelegate.cpp in:
Cocos2d-x/cocos2d-x-3.5/tests/cpp-tests/Classes/AppDelegate.cpp.
In that class the window size is tested to see which set of assets should be loaded. That is part of the issue, deciding which assets to load.
For iOS retina, you can check the size of the screen and then set the content scale factor to either 1.0 for standard resolution or 2.0 for retina resolution.
Unfortunately, cocos2d-x doesn't seem to detect certain screen sizes and call Director::getInstance()->setContentScaleFactor(2.0) for you, so I think we have to do this ourselves.
I have not tested the impact of setting contentScaleFactor on non-apple devices yet...
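As a rough sketch of doing that ourselves (cocos2d-x v3.x; the design resolution and the retina threshold below are just assumptions), something like this in AppDelegate::applicationDidFinishLaunching() could work:
auto director = Director::getInstance();
auto glview = director->getOpenGLView();

// Assumed "points" design resolution (iPhone 5 landscape here).
Size designSize(568, 320);
glview->setDesignResolutionSize(designSize.width, designSize.height, ResolutionPolicy::NO_BORDER);

// Physical pixels of the screen/window.
Size frameSize = glview->getFrameSize();

// If the physical frame is at least twice the design height, treat it as retina.
if (frameSize.height >= 2 * designSize.height)
    director->setContentScaleFactor(2.0f);
else
    director->setContentScaleFactor(1.0f);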
Check out the code below. Try running it in your AppDelegate::applicationDidFinishLaunching() method to get an idea of how cocos2d-x sees screen pixels and points; the result is not exactly what I expected. The output below is for an iPhone 5 device.
director->setContentScaleFactor(2);
Size winSizePoints = director->getWinSize();
Size winSizePixels = director->getWinSizeInPixels();
Size visibleSize = director->getVisibleSize();
CCLOG("Content scale factor set to 2.0.");
CCLOG("winSizePoints:%.2f,%.2f", winSizePoints.width, winSizePoints.height);
CCLOG("winSizePixels:%.2f,%.2f", winSizePixels.width, winSizePixels.height);
CCLOG("visibleSize:%.2f,%.2f", visibleSize.width, visibleSize.height);
director->setContentScaleFactor(1);
winSizePoints = director->getWinSize();
winSizePixels = director->getWinSizeInPixels();
visibleSize = director->getVisibleSize();
CCLOG("Content scale factor set to 1.0.");
CCLOG("winSizePoints:%.2f,%.2f", winSizePoints.width, winSizePoints.height);
CCLOG("winSizePixels:%.2f,%.2f", winSizePixels.width, winSizePixels.height);
CCLOG("visibleSize:%.2f,%.2f", visibleSize.width, visibleSize.height);
Output of above code:
Content scale factor set to 2.0.
winSizePoints:1136.00,640.00
winSizePixels:2272.00,1280.00
visibleSize:1136.00,640.00
Content scale factor set to 1.0.
winSizePoints:1136.00,640.00
winSizePixels:1136.00,640.00
visibleSize:1136.00,640.00
So it seems we have to check the screen size and then choose assets.
One option is to then use code like this to decide which assets to load based on whether the screen is retina or not. You could use a similar approach to load different-sized background images to deal with different aspect ratios (more on aspect ratios below).
float scaleFactor = CCDirector::getInstance()->getContentScaleFactor();
if (scaleFactor > 1.99)
{
SpriteFrameCache::getInstance()->addSpriteFramesWithFile("spriteSheet-hd.plist", "spriteSheet-hd.png");
}
else
{
SpriteFrameCache::getInstance()->addSpriteFramesWithFile("spriteSheet.plist", "spriteSheet.png");
}
For sprite sheets, I highly recommend Texture Packer. Awesome tool that can create SD and HD sheets and .plist files for you with the press of one button.
If you publish to multiple platforms, you will also want to use the macros to detect the device type and use that information in deciding which assets to load.
e.g.
#if (CC_TARGET_PLATFORM == CC_PLATFORM_WIN32)
#elif (CC_TARGET_PLATFORM == CC_PLATFORM_WP8)
#elif (CC_TARGET_PLATFORM == CC_PLATFORM_ANDROID)
#elif (CC_TARGET_PLATFORM == CC_PLATFORM_IOS)
#elif (CC_TARGET_PLATFORM == CC_PLATFORM_MAC)
#endif
Another thing to consider is that on low power CPU and RAM devices, loading large assets might make your frames per second (FPS) go to crap. Or worse, the device might not be able to load the asset or could crash the game. So be sure to test/profile on least common denominator devices that you plan to support.
Aspect ratio is another important issue (landscape width:height).
iPad aspect ratio is 4:3
iPhone 4 is 3:2
iPhone 5 & 6 are 16:9
Android throws in a bunch more width:height ratios.
WP8.1 is 15:9... not sure if others.
The point is that you probably don't want to stretch your backgrounds to get them to fill the screen, or have black bars on the edges where assets are not tall or wide enough.
To deal with this I create background assets that are wider than they need to be, with the extra content going off the right side of the screen. That content looks good but is not vital to convey the meaning of the background. The extra content is clipped on devices with narrower aspect ratios, e.g. iPad, and is visible on devices with wider aspect ratios, e.g. iPhone 6.
The background images are then scaled up/down to match the screen height while retaining their aspect ratio.
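A rough sketch of that idea (the asset name and the parent node are placeholders):
auto visibleSize = Director::getInstance()->getVisibleSize();
auto background = Sprite::create("background-wide.png");      // assumed wide background asset

// Uniform scale so the height fills the visible area; the aspect ratio is preserved.
float scale = visibleSize.height / background->getContentSize().height;
background->setScale(scale);

// Pin to the left edge so the extra width runs off the right side and gets clipped.
background->setAnchorPoint(Vec2(0.0f, 0.5f));
background->setPosition(Vec2(0.0f, visibleSize.height / 2.0f));
this->addChild(background);                                    // 'this' assumed to be the scene/layer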
According to docs
Starting with AIR 3 and Flash player 11, the size limits for a
BitmapData object have been removed. The maximum size of a bitmap is
now dependent on the operating system.
I am asking a follow-up question to this answer.
It would be nice to get the largest bitmap that OS would permit.
Can I check the RAM available and pick bitmap size accordingly? Or do I need to pick a size like 4096x4096 and stick to it?
Update: trying the following: new BitmapData(4096, 4096, transp, 0x00FFFFFF);
This gives me an error - Error #2015: Invalid BitmapData. It looks like I'm hitting the Flash Player 10 ceiling of 16,777,215 pixels even though I'm compiling for and running 11.
You could make a while (true) loop in which you create new BitmapData objects of the shape new BitmapData(1, x);
The limit is actually the BitmapData's width * height, so just keep increasing x until construction fails! Don't start from 1; start from 16 million or so. Afterwards you have your width * height limit, which is most likely some value squared. Just take Math.sqrt(x) and you will have your limit, assuming you are interested in squares. Otherwise, choose a width and your maximum height will be x / width, rounded down.
I'm drawing a simple square in stage3D, but the quality of the numbers and the edges in the picture is not as high as it should be:
Here's the example with the (small amount of) source code; I've put most of it in one file.
http://users.telenet.be/fusion/SquareQuality/
http://users.telenet.be/fusion/SquareQuality/srcview/
I'm using mipmapping; in my shader I use "<2d, miplinear, repeat>". The texture is a 256x256 jpg (larger than it appears in the image); I've also tried a png, tried "mipnearest", and tried without mipmapping. Anti-aliasing is 4, but 10 or more doesn't help at all...
Any ideas?
Greetings,
Thomas
Are you using antialiasing for the backBuffer?
// Listen for when the Context3D is created for it
stage3D.addEventListener(Event.CONTEXT3D_CREATE, onContext3DCreated);
function onContext3DCreated(ev:Event): void
{
var context3D:Context3D = stage3D.context3D;
// Setup the back buffer for the context
context3D.configureBackBuffer(stage.stageWidth, stage.stageHeight,
0, // no antialiasing (values 2-16 for antialiasing)
true);
}
I think that the size of your resource texture is too high. The GPU renders your scene pixel by pixel in the fragment shader. When it renders a pixel of your texture, the fragment shader gets a varying that represents the texture UV. The GPU simply takes the color of the pixel on that UV coordinate of your texture.
Now, when your texture size is too high, you will lose information because two neighboring pixels on the screen will correspond with non-neighboring pixels on the texture resource. For example: if you draw a texture 10 times smaller than the resource, you will get something like this (where each character corresponds with a pixel, in one dimension):
Texture: 0123456789ABCDEFGHIJKLM
Screen: 0AK
I'VE FOUND IT!!!
I went to the Starling forum and found an answer from Daniel from Starling:
"If you're using TRILINEAR, you're already using the best quality available. One additional thing you could try is to set the "antialiasing" value of Starling to a high value, e.g. 16, and see if that helps."
So I came across this article, which said that trilinear filtering is only used when you put the argument "linear" in your fragment shader; in my example program:
"tex ft0, v0, fs0 <2d, linear, miplinear, repeat>".
Greetings,
Thomas