I use ShoeBox to make a sprite sheet. It creates two files, a *.txt and a *.png. How can I load the individual textures from those files in libGDX?
I couldn't find any information about compatibility between libGDX and ShoeBox texture atlases. I think it's better to pack your textures with TexturePacker instead, because it's officially supported.
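For reference, here is a minimal sketch of loading TexturePacker output in libGDX (inside create(), for example), assuming the packer produced pack.atlas and pack.png; the file name and the region name "hero" are placeholders:

TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("pack.atlas"));
// Look up one packed image by the name it had before packing; the .png
// page(s) referenced by pack.atlas are loaded automatically.
// "pack.atlas" and "hero" are placeholder names, not anything ShoeBox
// or TexturePacker requires.
TextureRegion hero = atlas.findRegion("hero");
// Call atlas.dispose() when you no longer need it.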
I have a D3D11 Texture2D with the format DXGI_FORMAT_R10G10B10A2_UNORM and want to convert it into a D3D11 Texture2D with a DXGI_FORMAT_R32G32B32A32_FLOAT or DXGI_FORMAT_R8G8B8A8_UINT format, as only textures in those formats can be imported into CUDA.
For performance reasons I want this to happen entirely on the GPU. I've read some threads suggesting that I should set the second texture as a render target and render the first texture onto it, or convert the texture via a pixel shader.
But as I don't know much about D3D, I wasn't able to get that working.
In an ideal world I would be able to do this without setting up a whole rendering pipeline (IA, VS, etc.).
Does anyone have an example of this, or any hints?
Thanks in advance!
On the GPU, the way you do this conversion is a render-to-texture pass, which requires at least a minimal 'offscreen' rendering setup.
Create a render target view (DXGI_FORMAT_R32G32B32A32_FLOAT, DXGI_FORMAT_R8G8B8A8_UINT, etc.). The restriction here is that it needs to be a format supported as a render target view at your Direct3D hardware feature level. See Microsoft Docs.
Create an SRV for your source texture. Again, the format needs to be supported as a texture by your Direct3D hardware feature level.
Render the source texture to the RTV as a 'full-screen quad'. With Direct3D Hardware Feature Level 10.0 or greater, you can have the quad self-generated in the vertex shader, so you don't really need a vertex buffer for this. See this code.
Given that you are starting with DXGI_FORMAT_R10G10B10A2_UNORM, you pretty much require Direct3D Hardware Feature Level 10.0 or better anyway. That actually makes it pretty easy. You still need to get a full rendering pipeline going, although you don't need a 'swapchain'.
You may find this tutorial helpful.
I plan to use TextureAtlas in my open-world 2D game.
I need to load textures dynamically (there are thousands of them, so I cannot load them all at once). I plan to load only the textures that are needed at a specific moment in gameplay. Also, for various reasons, I cannot have many texture atlases per map location.
Generally, I must avoid the situation where I read all the textures (the entire atlas), because RAM usage would be too high. How does TextureAtlas work? Is it possible to keep the atlas open during the entire game, but read only chosen textures from it (into RAM) when needed, without worrying about RAM usage?
Best regards.
You cannot load portions of a TextureAtlas. It is all or nothing. You will have to use multiple atlases and carefully plan what to put in each atlas such that you don’t have to load all of them simultaneously.
A Texture represents a single image loaded into GPU memory for use in OpenGL. A page of a TextureAtlas corresponds to a single Texture. Typically, you will have multiple TextureRegions on each page (Texture) of a TextureAtlas. If you don't, there's no point in using TextureAtlas. The point of TextureAtlas is to avoid SpriteBatch having to flush vertex data and swap OpenGL textures for every sprite you draw. If you draw consecutive TextureRegions from the same Texture (or atlas page), they are batched together into a single mesh and a single OpenGL texture bind, so it performs better.
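As a minimal sketch of how that batching is used in practice (the atlas file name "sprites.atlas" and the region names are hypothetical placeholders):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;
import com.badlogic.gdx.graphics.g2d.TextureRegion;

public class AtlasDrawExample extends ApplicationAdapter {
    private TextureAtlas atlas;
    private TextureRegion tree, rock;
    private SpriteBatch batch;

    @Override
    public void create() {
        // "sprites.atlas" and the region names below are placeholder names.
        atlas = new TextureAtlas(Gdx.files.internal("sprites.atlas"));
        tree = atlas.findRegion("tree");
        rock = atlas.findRegion("rock");
        batch = new SpriteBatch();
    }

    @Override
    public void render() {
        batch.begin();
        // Both regions come from the same atlas page (Texture), so these
        // draws end up in one mesh with one texture bind.
        batch.draw(tree, 100, 200);
        batch.draw(rock, 300, 150);
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        atlas.dispose(); // disposing the atlas also disposes its page Textures
    }
}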
Libgdx doesn’t support loading individual pages of a TextureAtlas though. That would make it too complicated to use. So, if you are doing a lot of loading and unloading, I recommend avoiding multipage atlases. Use AssetManager to very easily load and unload what you need.
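For example, a rough sketch of per-area loading with AssetManager; the directory layout and naming scheme ("atlases/<area>.atlas") are my own assumptions, not anything libGDX requires:

import com.badlogic.gdx.assets.AssetManager;
import com.badlogic.gdx.graphics.g2d.TextureAtlas;

public class AreaAtlasCache {
    // One AssetManager shared for the whole game.
    private final AssetManager manager = new AssetManager();

    // Queue and (synchronously) load the atlas for one map area.
    // The "atlases/" path and per-area naming are hypothetical.
    public TextureAtlas loadArea(String areaName) {
        String path = "atlases/" + areaName + ".atlas";
        manager.load(path, TextureAtlas.class);
        manager.finishLoading(); // or call manager.update() each frame for async loading
        return manager.get(path, TextureAtlas.class);
    }

    // Free the atlas (and its page Textures) when the player leaves the area.
    public void unloadArea(String areaName) {
        manager.unload("atlases/" + areaName + ".atlas");
    }

    public void dispose() {
        manager.dispose(); // unloads everything still managed
    }
}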
I'm trying to write a 2D game using Starling. Most of my textures are up to 4K, and I'm getting a resource limit exception.
Is there a way, an algorithm, or an idea for getting HD textures to work within the limited texture resources of Stage3D? Compression?
Stage3D (which Starling uses underneath) has a max texture size of 2048x2048. If your textures are larger than this, then you are going to have to split them and stitch them together at runtime.
If you find yourself running out of memory (rather than the dimensional size limit) then you can look into compression using ATF textures.
This is more of an "implementation of technology" kind of question.
In the old days, when I worked in C, you could choose whether bitmap structures were allocated in VGA memory or in RAM, and then you could work with them a lot faster.
Now it's 2013: I create a bitmap in AS3 and it is allocated in RAM (I've seen no option to use the GPU, and I'm sure it is using RAM in 100% of cases, because memory usage increases by exactly the expected bitmap size).
Is there any option to use GPU memory?
Thanks
Check out the API docs for flash.display3D.Texture - there are 3 methods:
uploadCompressedTextureFromByteArray(data:ByteArray, byteArrayOffset:uint, async:Boolean = false):void
Uploads a compressed texture in Adobe Texture Format (ATF) from a ByteArray object.
uploadFromBitmapData(source:BitmapData, miplevel:uint = 0):void
Uploads a texture from a BitmapData object.
uploadFromByteArray(data:ByteArray, byteArrayOffset:uint, miplevel:uint = 0):void
Uploads a texture from a ByteArray.
So you can't allocate memory directly on the GPU. You must upload data from a ByteArray or BitmapData, which first exists in RAM. However, to minimize CPU RAM usage, you could potentially reuse a single ByteArray or BitmapData in RAM, change its contents and upload it many times, or release it after loading. But you can't access the contents of GPU memory directly, as far as I know.
As for "read access", the only way to get data back from GPU memory (again, a slow workaround) is to draw the Context3D back into a BitmapData via Context3D.drawToBitmapData... basically like a screen grab. The Starling Framework has an example of this functionality via Stage.drawToBitmapData.
Basically, the Stage3D APIs weren't set up to let you access GPU memory easily.
You cannot allocate GPU memory manually as you can in other languages, but you can indeed accelerate your graphics using the GPU through different Adobe technologies.
For example, if you want GPU-accelerated video decoding you should use StageVideo, and if you want to accelerate 2D or 3D graphics you can use Stage3D.
Unless you want to work with Stage3D at a low level, it is recommended that you use an intermediary framework.
For 2D, the best solution is by far Starling. It is a solid framework, endorsed by Adobe, that has been used in countless commercial projects and is constantly optimised.
As for 3D, take a look at Flare3D or Away3D.
I am working on a big 3D model. I want to load its textures once all the 3D files have loaded and rendered, so that I can load the textures one by one and apply them to the models. Does anyone know how to achieve runtime texture loading and applying using ActionScript and Away3D?
Thank you
I usually use LoaderMax queues to load all my assets. You could set one up that loads all your bitmaps. You can then listen for each individual item's load-complete event, or for the whole queue completing, and then create your materials and assign them to your meshes.
Also, you need to recursively assign the material to a mesh's SubMeshes (see subMeshes : Vector.<SubMesh> in the Mesh class) for it to update.