Game hangs when there are too many Sprites in a TextureAtlas - libgdx

To render crisp graphics with sprites, I decided to provide a dedicated sprite folder for every resolution size. I have 55 such folders, and each folder contains 57 .png files (so 57 * 55 = 3135 files in total). I put all 55 folders into a single folder and ran TexturePacker on that folder. In the output folder, I get one pack.atlas file and 212 packX.png files (X runs from 1 to 212).
In my code, I create these fields in my Assets class:
private static TextureAtlas atlas;
private static Sprite logo;
// ... total of 57 Sprite fields....
And in the initialize method of the Assets class, which is called at the beginning of the game:
atlas = new TextureAtlas(Gdx.files.internal("pack.atlas"));
logo = atlas.createSprite(getWidthPath() + "logo"); // getWidthPath() returns the appropriate folder path based on the resolution size; as stated above, there are 55 such folders
//...atlas.createSprite 57 times for 57 fields....
I have done this with a smaller number of folders (3 folders) and the game ran well. However, when I decided to support all 55 resolution folders, the game could no longer load on Android, and on desktop it loads but is really slow at startup.
So is it true that the large number of sprites referenced by the pack.atlas file caused the hang?
I think that if I just ran TexturePacker on each resolution folder, I would get 55 pack.atlas files (instead of 1), each referencing 57 sprites (instead of 3135), and the game should run fine. But doing that by hand is too laborious.

As @noone said, packing sprites of all resolution sizes into one TextureAtlas defeats the purpose of an atlas. I must pack one atlas per resolution. But the problem is how to do it automatically?
After seeing that the TexturePacker settings do not allow this, I finally had to modify the source code of the TexturePacker class a little and build my own custom libgdx build, following the instructions here.
I only changed the main function to accept one more parameter (true or false) specifying whether to pack each subfolder of the input folder into its own atlas. It's not a conventional way to specify options, but it's enough for my case.
Here is the modified main function in class TexturePacker.java:
static public void main (String[] args) throws Exception {
	String input = null, output = null, packFileName = "pack.atlas";
	boolean optionSeparateFolder = false;
	// Intentional fall-through: each extra argument is optional.
	switch (args.length) {
	case 4:
		packFileName = args[3];
	case 3:
		optionSeparateFolder = Boolean.parseBoolean(args[2]);
	case 2:
		output = args[1];
	case 1:
		input = args[0];
		break;
	default:
		System.out.println("Usage: inputDir [outputDir] [packToSeparateFolder] [packFileName]");
		System.exit(0);
	}
	File inputFile = new File(input);
	if (output == null) {
		// Default output folder next to the input, as in the original tool.
		output = new File(inputFile.getParentFile(), inputFile.getName() + "-packed").getAbsolutePath();
	}
	if (!optionSeparateFolder) {
		// Original behavior: pack the whole input folder into one atlas.
		process(input, output, packFileName);
	} else {
		// New behavior: pack each subfolder of the input folder into its own atlas.
		File outputFile = new File(output);
		String[] childrenOfInput = inputFile.list();
		for (int i = 0; i < childrenOfInput.length; i++) {
			process(inputFile.getAbsolutePath() + "/" + childrenOfInput[i],
				outputFile.getAbsolutePath() + "/" + childrenOfInput[i], packFileName);
		}
	}
}
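With this change, one run packs every resolution subfolder into its own atlas. A hypothetical invocation (the jar names and package path depend on your libgdx version and custom build; Windows uses ';' as the classpath separator):

java -cp gdx.jar:gdx-tools.jar com.badlogic.gdx.tools.texturepacker.TexturePacker assets-raw assets-packed true pack.atlas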

I think that if I just ran TexturePacker on each resolution folder, I would get 55 pack.atlas files (instead of 1), each referencing 57 sprites (instead of 3135), and the game should run fine. But doing that by hand is too laborious.
Running the packer 55 times is too laborious, but creating each sprite in 55 different variations is not?
You should come up with a different approach, since this one will not work out. I assume one of your atlas pages is 1024x1024, so roughly 1 MB. Having 212 such pages results in 212 MB just for the sprites. And any given user will only ever use ONE such set of sprites, actively using about 1 MB while still carrying the other 211 MB of useless sprite sets around.
Furthermore, by not grouping your assets in any sane way (like one atlas per resolution) you completely destroy the purpose of an atlas. A TextureAtlas is used to reduce the number of OpenGL texture bindings, which hurt performance. But in your case the needed assets end up anywhere within those 212 pages, which means that while rendering you will still have lots of texture bindings behind the scenes, because the sprites are scattered across all 212 pages.
You might have a look at Viewport. It helps you deal with different resolutions, and many strategies are already implemented. You should use this approach rather than supplying one set of sprites for every possible resolution. If you want pixel-perfect sprite rendering for each resolution, you will still need those 55 variations, but then you should rather ship 55 builds of your app instead of packing all 55 variants into one single app.
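As a rough sketch of that approach (the class name and the 800x480 virtual size are placeholders, not from the question), a FitViewport lets one set of assets scale to any screen:

import com.badlogic.gdx.ScreenAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;

public class GameScreen extends ScreenAdapter {
	// One virtual resolution; the viewport scales and letterboxes it onto the real screen.
	private final OrthographicCamera camera = new OrthographicCamera();
	private final Viewport viewport = new FitViewport(800, 480, camera);

	@Override
	public void resize (int width, int height) {
		viewport.update(width, height, true); // true re-centers the camera after a resize
	}
}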

Related

LibTiff C# - Getting coordinates

I am using the LibTiff.NET library to load GeoTiff data in C# (inside Unity).
NOTE: I looked at GDAL also, but faced similar issues as outlined below, and would much prefer to use LibTiff if possible.
I would ultimately like to be able to take a lat/long value and have a function that returns a chunk of pixel data for a 50m area around that point, streamed from a GeoTiff image on disk (not storing whole image in RAM).
I have a test file that is representative of what my software will be given in production.
I am trying to figure out how to read or compute the lat/long extents of the test file image, as I can't find a good tutorial or sample online which contains this functionality.
I can read the width and height of the file from the TiffTags, but many other values that seem critical for computing the extents, such as the X and Y resolutions, are not present.
It also appears like the lat/long extents (or a bounding box) are not present in the tags.
At this point I am led to believe there may be more tags or header data that I am not familiar with, because when I load the test file into Caris EasyView I can see a number of properties that I would like to read or compute from the file.
Is it possible to obtain this data using LibTiff?
Or is there a better system I should use? (wrapped GDAL maybe?)
NOTE: I cannot link the test file due to NDA, plus it's enormous.
This is for a 32-bit GeoTIFF:
// Image dimensions and sample layout
int width = tiff.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int height = tiff.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int samplesPerPixel = tiff.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt();
int bitsPerSample = tiff.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt();
int bytesPerSample = bitsPerSample / 8;
byte[] scanline = new byte[tiff.ScanlineSize()];
float[] scanline32Bit = new float[tiff.ScanlineSize() / 4]; // 4 bytes per 32-bit sample

// ModelTiepointTag: doubles 3 and 4 (byte offsets 24 and 32) hold the
// geographic X (longitude) and Y (latitude) of the raster origin
FieldValue[] modelTiePointTag = tiff.GetField(TiffTag.GEOTIFF_MODELTIEPOINTTAG);
byte[] modelTiePoint = modelTiePointTag[1].GetBytes();
double originLon = BitConverter.ToDouble(modelTiePoint, 24);
double originLat = BitConverter.ToDouble(modelTiePoint, 32);

// ModelPixelScaleTag: doubles 1 and 2 hold the pixel size in degrees;
// Y is negated because raster rows run north to south
FieldValue[] modelPixelScaleTag = tiff.GetField(TiffTag.GEOTIFF_MODELPIXELSCALETAG);
byte[] modelPixelScale = modelPixelScaleTag[1].GetBytes();
double degPerPixelLong = BitConverter.ToDouble(modelPixelScale, 0);
double degPerPixelLat = BitConverter.ToDouble(modelPixelScale, 8) * -1;
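From those values, the lat/long extents the question asks about follow directly. A sketch, assuming a north-up image with no rotation (i.e., no ModelTransformationTag), using the variables read above:

// Bounding box from the origin tie point plus pixel scale
double minLon = originLon;
double maxLon = originLon + width * degPerPixelLong;
double maxLat = originLat;                           // top edge of the image
double minLat = originLat + height * degPerPixelLat; // degPerPixelLat is negative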

OpenAI gym: How to get pixels in CartPole-v0

I would like to access the raw pixels in the OpenAI gym CartPole-v0 environment without opening a render window. How do I do this?
Example code:
import gym
env = gym.make("CartPole-v0")
env.reset()
img = env.render(mode='rgb_array', close=True) # Returns None
print(img)
img = env.render(mode='rgb_array', close=False)
# Opens annoying window, but gives me the array that I want
print(img.shape)
PS. I am having a hard time finding good documentation for OpenAI gym. Is it just me, or does it simply not exist?
Edit: I never need to open the render window.
I was curious about the same thing, so I started looking into the source code, and this is what I found.
OpenAI Gym uses pyglet for displaying the window and animations.
To show the animation, everything is drawn onto the window and then rendered.
pyglet then stores what is being displayed in a buffer.
Here is a dummy version of how the code is written in OpenAI Gym:
import pyglet
from pyglet.gl import *
import numpy as np

display = pyglet.canvas.get_display()
screens = display.get_screens()        # list of physical screens
config = screens[0].get_best_config()  # best GL config for the first screen
pyglet.window.Window(width=500, height=500, display=display, config=config)

# draw whatever you want here

# read the image back from the color buffer
buffer = pyglet.image.get_buffer_manager().get_color_buffer()
image_data = buffer.get_image_data()
arr = np.frombuffer(image_data.get_data(), dtype=np.uint8)
print(arr)
print(arr.shape)
output:
[0 0 0 ... 0 0 0]
(1000000,)
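Those 1,000,000 values are just 500 x 500 pixels x 4 RGBA bytes, so the flat array can be reshaped into an image. A sketch (note GL color buffers are stored bottom-up, hence the vertical flip):

frame = arr.reshape(500, 500, 4)[::-1]  # (height, width, RGBA), flipped to top-down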
So basically, every image we get comes from the buffer of whatever is displayed on the window; here that is 500 x 500 pixels x 4 RGBA bytes = 1,000,000 values.
If we don't draw anything on the window, we get no image, so the window is required to get the image.
You need to find a way for the window not to be displayed while its contents are still stored in the buffer.
I know it's not what you wanted, but I hope it might lead you to a solution.
I've just gone through half of the gym source code line by line, and I can tell you that, first, the observation space of CartPole is numbers fed to the AI, not pixels. E.g., from their cartpole env .py file:
Observation:
Type: Box(4)
Num Observation Min Max
0 Cart Position -2.4 2.4
1 Cart Velocity -Inf Inf
2 Pole Angle -0.209 rad (-12 deg) 0.209 rad (12 deg)
3 Pole Angular Velocity -Inf Inf
So, the pixels are for you at this point. Second, if your goal is to teach the AI on pixels, you will need to render images from your data-in array, then pass them THROUGH the observation space as a pixel array, like Maunish Dave shows. OpenAI's Atari version does this.
If you want a better guide, don't read the OpenAI docs; read the Stable Baselines docs here: https://stable-baselines.readthedocs.io/
Someone offers an answer here:
https://github.com/openai/gym/issues/374
"The atari and doom environments give pixels in their observations (ie, the return value from step). I don't think any other ones do.
render produces different results on different OSes, so they're not part of any official environment for benchmarking purposes. But if you want to create a new environment where the observation is in pixels, you could implement it by wrapping an existing environment and calling render."
I'm also working on getting raw pixels, and I'm trying to find a way to verify that what has been returned is what I expect it to be.
The documentation can be found at:
https://gym.openai.com/docs
And there is a forum for discussing OpenAI:
discuss.openai.com
Although it's not very lively.
I have faced a similar problem. This is how I fixed it: in the rendering.py file at /gym/envs/classic_control, find the following line in the Viewer class:
self.window = pyglet.window.Window(width=width, height=height, display=display)
Change this line to:
self.window = pyglet.window.Window(width=width, height=height, display=display, visible=False)
Hope it helps!!
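With that one-line patch applied, the original example works without popping a window. A sketch (the exact frame shape depends on the gym version):

import gym

env = gym.make("CartPole-v0")
env.reset()
img = env.render(mode='rgb_array')  # no visible window now
print(img.shape)                    # e.g. (400, 600, 3)
env.close()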

Do we need to add a texture to the texture cache to get the benefits of autobatching?

Normally, when one loads the sprite frame cache from a file by calling:
SpriteFrameCache::getInstance()->addSpriteFramesWithFile(filename);
internally, the texture corresponding to that file is added to the texture cache by calling:
Texture2D *texture = Director::getInstance()->getTextureCache()->addImage(texturePath.c_str());
which is basically creating a Texture2D object from that image and storing it in an unordered_map.
This does not happen internally when I generate my texture on the fly and add sprite frames to the SpriteFrameCache within a loop, like this:
// Both branches of the original if/else were identical, so the rotated flag
// can simply be passed straight through:
SpriteFrame* frame = SpriteFrame::createWithTexture(texture, rect, isRotated, offset, originalSize);
SpriteFrameCache::getInstance()->addSpriteFrame(frame, frameName);
It seems that no calls are made internally to addImage() in the texture cache when I add frames this way (by calling addSpriteFrame()), even though all the sprite frames use the same texture.
The counter on the bottom left that displays the number of OpenGL calls says there are only 2 calls, regardless of how many frames I add to the screen.
When calling
p Director::getInstance()->getTextureCache()->getCachedTextureInfo()
I get the output:
"/cc_fps_images" rc=4 id=254 999 x 54 @ 16 bpp => 105 KB
TextureCache dumpDebugInfo: 1 textures, for 105 KB (0.10 MB)
That is the texture that shows the FPS rate... so there is no sign of my texture, but at the same time there is no problem adding frames that use it.
So my question is: will there be a performance problem later on because of this? Should I add the texture to the texture cache manually? Are there any other problems that I may encounter by adding my sprite frames this way?
Also, my texture is created using Texture2D* tex = new Texture2D() followed by initWithData(). So should I keep a reference to this pointer and call delete later? Or is it enough to just call removeUnusedTextures?
So my question is: will there be a performance problem later on because of this?
It depends how many times you'd be using this texture.
Should I add the texture to the texture cache manually?
Again, it depends how often you'd use it. If it's created dynamically several times, caching will improve performance, since you don't have to recreate it again and again.
Are there any other problems that I may encounter by adding my sprite frames this way?
I don't think so.
Also, my texture is created using Texture2D* tex = new Texture2D() followed by initWithData(). So should I keep a reference to this pointer and call delete later?
Well, if you just want to abandon tex (making it a local variable) because you created a sprite from it, you can do that. But the sprite simply holds a pointer to this texture; if you release the texture itself, it will disappear (the sprite will probably become a black rectangle).
Or is it enough to just call removeUnusedTextures?
This just clears the TextureCache map. If your texture isn't in it, this won't release it.
You'd have to specify the use case for this texture. If, let's imagine, you have a texture containing a bullet (created using initWithData) that is used frequently, you can just keep that one texture object stored in your scene and create all bullet sprites from it; using the TextureCache won't make that any faster. However, you have to remember to release the texture's memory when you don't need it anymore (for example, when you leave the scene), because you created the Texture2D with the new keyword rather than a create factory (like Sprite::create; Texture2D doesn't have one), which would auto-manage the memory.
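A minimal sketch of that manual-memory pattern (cocos2d-x v3 API; the pixel buffer and dimensions are placeholder names, not from the question):

// A Texture2D created with `new` starts with a reference count of 1 and is
// not autoreleased, so whoever owns it must release it explicitly.
auto texture = new cocos2d::Texture2D();
texture->initWithData(pixels, dataLen, cocos2d::Texture2D::PixelFormat::RGBA8888,
                      width, height, cocos2d::Size(width, height));

// ... create SpriteFrames from `texture` as in the question ...

// Later, e.g. when leaving the scene and no sprite uses it anymore:
texture->release();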

How To Store a Dungeon Map

I'm using AS3, but general programming wisdom unspecific to AS3 is great too!
I am creating my first game, a top-down dungeon crawler with tile-based navigation, and I am deciding how to store my maps. I need to be able to access a specific tile at any point in time. My only thought so far is to use nested Vectors or Arrays with the first level being the row and the second being the column, something like this:
private var map:Array = new Array(Array(0,1,0,0,1,1,0),Array(0,1,0,1,0,1,0));
private var row2col3:uint = map[1][2];
/*map would display as such:*/
#|##||#
#|#|#|#
Ultimately, the idea is to build a Map class that will be easily extensible and, again, allow free access to any specific tile. I am looking for help in determining an effective/efficient design architecture for that Map class.
Thanks!
As stated in the comments, I've uploaded the source code of a 12-hour challenge project I did to create a tile-based level editor. The source code can be found at: GitHub BKYeates
This level editor assumes textures with power-of-2 dimensions and uses blitting for all texture drawing. It can read, write, and store partial tiles, and there is also some functionality for erasing and drawing collision boxes.
Now, in regard to how the storage should be set up, it is really up to you. If you are going to store lots of information, I recommend using Vectors: Vectors perform faster than most other container types, except for ByteArray (if used correctly). In my level editor I used a Vector with a particular setup.
The Vector I used is named _map and lives in a class called tilemodel, which is responsible for updating all the storage information when a change is made. The _map variable is set up like so:
_map = new Vector.<Vector.<Vector.<Object>>>();
This is a pretty heavily nested Vector, and in the end it stores, can you believe it, an Object! Which admittedly eats into the performance gains you get from using Vector when you index the innermost elements.
But ignore that, because the indexing gain from this setup is really key. The reason it is set up this way is that I can reference a layer, a row, and a column to grab a specific tile object. For example, suppose I want to access a tile on layer 2, in row 12, column 13:
var tileObject:Object = _map[2][12][13];
That works perfectly for pretty much any scenario I could hit in my tile-based game, and the speed is comparatively better than that of an Object or Dictionary when it is accessed many times (i.e., in a loop, which happens often).
The level editor is designed to use blitting everywhere and leave the onus of storage to my management classes. The speed gain from doing this is very high, and as currently set up the tilemodel can store partial bitmaps, making it slightly more flexible than your standard rigid power-of-2 texture reader.
Feel free to look through the source code. But here is a summary of what some of the classes do:
tilecontroller - Issues state changes and updates to tilemanager and tilemodel.
tilemanager - Responsible for texture drawing and removal.
tilemodel - Stores and updates the current map on state changes.
r_loader - Loads all assets from assetList.txt (the paths to the images are set there).
hudcontroller - The last thing I was working on; lets you draw collision boxes, which are stored in a separate file alongside the map.
g_global & g_keys - Global constants and static methods used ubiquitously.
LevelEditor - Main class, also designed as the "View" class (see the MVC pattern).
Also, as I've mentioned, it can read back everything it stores. I did not upload the class used for that to GitHub, but I figured I would show the important method here:
// @param assets needs to be the list of loaded bitmap images
public function generateMap( assets:* ):void {
	var bmd:BitmapData = new BitmapData( g_global.stageWidth, g_global.stageHeight, true, 0 );
	_canvas = new Bitmap( bmd, "auto", true );
	_mapLayer.addChild( _canvas );
	_canvas.bitmapData.unlock();
	g_global.echo( "generating map" );
	var i:int, j:int, m:int;
	for ( m = 0; m < _tiles.length; m++ ) {               // layers
		for ( i = 0; i < _tiles[m].length; i++ ) {         // rows
			for ( j = 0; j < _tiles[m][i].length; j++ ) {   // columns
				// Why the int cast in the test below? The editor stores tiles larger than
				// the grid size at indices whose values are a percent of the tile size;
				// those fractional entries are partial tiles and must be skipped here.
				var tile:Object = _tiles[m][i][j];
				if ( tile != null && int( tile.tile ) == tile.tile ) {
					addTile( g_global.GRIDSIZE * tile.column, g_global.GRIDSIZE * tile.row,
						{ index:tile.tile, bitmap:assets[ tile.tile ] }, tile.rotation );
				}
			}
		}
	}
	_canvas.bitmapData.lock();
}
Anyway I hope this information finds you well. Good luck!
I asked a similar question a while back: https://gamedev.stackexchange.com/questions/60433/is-it-more-efficient-to-store-my-tile-grid-as-a-dictionary-or-an-array. I'm not sure that it really matters whether it's an Array or a Vector (the differences in efficiency seem to vary between Flash Player versions, etc.).
But, yeah, you probably want to use one or the other (not a Dictionary or anything), and you probably want to index it like [y * width + x], not [x][y], for efficiency and to avoid overly complicated data structures.
Also, if you need to access the Array or Vector regularly from outside the class, just make the variable internal or public or whatever; making it private and wrapping it in accessor functions, while more prim-and-proper class design, would still be overkill.
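For reference, a minimal sketch of the flat [y * width + x] layout (the variable names are mine, not from the question):

// Flat tile storage: one fixed-length Vector, indexed as y * width + x
var mapWidth:uint = 7;
var mapHeight:uint = 2;
var tiles:Vector.<uint> = new Vector.<uint>( mapWidth * mapHeight, true );

// write: set the tile at column 2, row 1
tiles[ 1 * mapWidth + 2 ] = 1;

// read: the same arithmetic recovers the tile
var tile:uint = tiles[ 1 * mapWidth + 2 ];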
One method I am using right now for my own project is storing my tiles in a black-and-white pixel bitmap (with a wrapper class around it). I'm not sure how efficient this is overall, as I've never benchmarked it and just wrote it quickly to create a map for testing purposes, but I am finding that it offers an advantage: I can draw my maps in an image editor and view them easily, while still allowing random pixel/tile access.
Looking at your sample code, I'm guessing you have only two types of tiles right now, so you could just use black and white pixels as well if you want to try it.
I've done the 2D-array method as well (and still use it for other parts), which works fine too, but it can be harder to visualise at larger sizes. Looking forward to Bennett's answer.

How do I play back a WAV in ActionScript?

Please see the class I have created at http://textsnip.com/see/WAVinAS3 for parsing a WAVE file in ActionScript 3.0.
This class correctly pulls apart the info from the file header and fmt chunks, isolates the data chunk, and creates a new ByteArray to store it. It takes in an uncompressed WAVE file with a format tag of 1. The WAVE file is embedded into my SWF with the following Flex embed tag:
[Embed(source="some_sound.wav", mimeType="application/octet-stream")]
public var sound_class:Class;
public var wave:WaveFile = new WaveFile(new sound_class());
After the data chunk is separated, the class attempts to make a Sound object that can stream the samples from the data chunk. I'm having issues with the streaming process, probably because I'm not good at math and don't really know what's happening with the bits/bytes, etc.
Here are the two documents I'm using as a reference for the WAVE file format:
http://www.lightlink.com/tjweber/StripWav/Canon.html
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Right now, the file IS playing back! In real time, even! But...the sound is really distorted. What's going on?
The problem is in the onSampleData handler.
In your wav file, the amplitudes are stored as signed shorts, that is, 16-bit integers. You are reading them as 32-bit signed floats. Integers and floats are represented differently in binary, so that will never work right.
Now, the player expects floats. Why did they use floats? I don't know for sure, but one good reason is that it allows the player to accept a normalized value for each sample. That way you don't have to care or know what bit depth the player is using: the max value is 1, the min value is -1, and that's it.
So, your problem is that you have to convert your signed short to a normalized signed float. A short takes 16 bits, so it can store 2 ^ 16 (or 65,536) different values. Since it's signed, and the sign takes up one bit, the maximum magnitude is 2 ^ 15, so you know your input is in the range -32,768 ... 32,767.
The sample value, on the other hand, is normalized and must be in the range -1 ... 1.
So you have to normalize your input. It's quite easy: just take the read value and divide it by the maximum magnitude, and you have your input amplitude converted to the range -1 ... 1.
Something like this:
private function onSampleData(evt:SampleDataEvent):void
{
	var amplitude:int = 0;
	var maxAmplitude:int = 1 << (bitsPerSample - 1); // or Math.pow(2, bitsPerSample - 1)
	var sample:Number = 0;
	var actualSamples:int = 8192;
	var samplesPerChannel:int = actualSamples / channels;
	for ( var c:int = 0; c < samplesPerChannel; c++ ) {
		var i:int = 0;
		while ( i < channels && data.bytesAvailable >= 2 ) {
			amplitude = data.readShort();      // raw signed 16-bit sample
			sample = amplitude / maxAmplitude; // normalize to the range -1 ... 1
			evt.data.writeFloat(sample);       // the player expects floats
			i++;
		}
	}
}
A couple of things to note:
- maxAmplitude could (and probably should) be calculated when you read the bit depth. I'm doing it in the method just so you can see it in the pasted code.
- Although maxAmplitude is calculated based on the read bit depth, and thus will be correct for any bit depth, I'm reading shorts in the loop, so if your wav file happens to use a different bit depth, this function will not work correctly. You could add a switch and read the necessary amount of data (i.e., readInt if the bit depth is 32). However, 16 bits is such a widely used standard that I doubt this is practically needed.
- This function will work for stereo wavs. If you want it to work for mono, rewrite it to write the same sample twice: for each read, you do two writes (your input is mono, but the player expects 2 samples). See the sketch after this list.
- I removed the EOF catch, as you can know whether you have enough data to read from your buffer by checking bytesAvailable. Reaching the end of the stream is not exceptional in any way, IMO, so I'd rather handle that case without an exception handler, but this is just a personal preference.
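For the mono case mentioned above, a minimal sketch using the same variables as the handler (each decoded sample is written once per output channel):

// Mono input: duplicate each normalized sample into left and right channels
for ( var n:int = 0; n < 8192 && data.bytesAvailable >= 2; n++ ) {
	sample = data.readShort() / maxAmplitude; // normalize to -1 ... 1
	evt.data.writeFloat(sample);              // left
	evt.data.writeFloat(sample);              // right
}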