I have two PNGs and want to generate a GIF in which png-1's alpha decreases step by step so that png-2 shows through.
If I generate several new PNGs with different alpha values from png-1 and add them to the GIF, I get the effect I want, but the GIF file is very large.
I want to know whether there is a way to generate the GIF I want with just two frames.
I have used OpenCV to read PNG files (with no background) and convert them into pygame surfaces. Here are two examples of how I did it:
Method 1:
character = pygame.image.load("character.png")
Method 2:
CharacterImage = cv2.imread("character.png")
CharacterImage = convertToRGB(CharacterImage, CharSize, CharSize)
# CharSize is the desired character size (for example: 50)
# convertToRGB is a user-defined function, needed because pygame
# renders RGB whereas cv2 loads images in BGR format
CharacterActions.append(rotate(pygame.surfarray.make_surface(CharacterImage), charrot))
# charrot is the rotation angle
I understand that I could manually resize the images and then use the first method to get a transparent background. But I want to know whether the same is possible via the second method; I don't want to manually resize so many images, which is why I'm asking if there's a way to do it in code.
Thanks in Advance
On the images you load using your Method 1 (pygame.image.load()), you should use pygame.transform.scale() and pygame.transform.rotate() to manipulate the loaded image.
To maintain the transparency, you need to keep the alpha channel of the image you are loading. To do that, call .convert_alpha() on the resulting surface. You can do that before or after the transform.
Each of these functions is linked to its documentation; just click through to read it.
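Putting the answer together, here is a minimal sketch. It assumes pygame 2.x; the "character.png" here is synthesized on the fly just so the example is self-contained, and the SDL "dummy" video driver lets convert_alpha() work without opening a real window.

```python
import os
import pygame

# Headless-safe setup (assumption: pygame 2.x with SDL's "dummy" driver)
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")
pygame.init()
pygame.display.set_mode((1, 1))

# Synthesize a small PNG with per-pixel alpha; in practice you would
# simply load your own "character.png".
src = pygame.Surface((100, 100), pygame.SRCALPHA)
pygame.draw.circle(src, (255, 0, 0, 128), (50, 50), 40)
pygame.image.save(src, "character.png")

CharSize = 50  # desired character size, as in the question
charrot = 45   # rotation angle in degrees

character = pygame.image.load("character.png").convert_alpha()
character = pygame.transform.scale(character, (CharSize, CharSize))
character = pygame.transform.rotate(character, charrot)

# The per-pixel alpha survives both transforms
print(bool(character.get_flags() & pygame.SRCALPHA))
```

Note that rotating by 45 degrees enlarges the surface's bounding box, so the rotated surface ends up wider than CharSize.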
I am using 12 image buttons in my game. Should I make a single texture pack for all the buttons, or a separate .pack file and .png for each image button?
You should use the smallest number of images possible, to reduce the number of images that need to be loaded into memory.
"The main reason to use a texture packer is that loading individual images is expensive. Loading 200 small images would take a lot of processing time, whereas loading 1 image and using portions of that image would use a considerably smaller amount."
Quote from gamedevelopment.blog
I have straight-alpha image files, but they need to become premultiplied alpha (PMA) before being blended. I have a constraint that I cannot preprocess the files themselves. How can I multiply the color information by alpha on the fly, in code, before sending it to the SpriteBatch? Currently the textures are of the format TextureRegion.
I saw that I could draw a Pixmap onto the texture, so I could call getTextureData(), get the Pixmap, change it, and draw it back. But I am not sure that is the most efficient way to do it.
To convert a non-premultiplied alpha image to premultiplied alpha, you need to iterate through each pixel and multiply the colour by the alpha:
color.rgb *= color.a
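In libGDX that loop would run over the Pixmap's pixels; the arithmetic itself is easy to show with a small numpy sketch (the pixel values here are made up):

```python
import numpy as np

# A made-up 1x2 straight-alpha RGBA image, channels in [0, 1]
rgba = np.array([[[1.0, 0.5, 0.0, 0.5],
                  [0.2, 0.4, 0.6, 1.0]]], dtype=np.float32)

# color.rgb *= color.a, applied to every pixel at once
pma = rgba.copy()
pma[..., :3] *= pma[..., 3:4]

print(pma[0, 0])  # the half-transparent pixel's colour is halved
```

A fully opaque pixel (alpha = 1.0) is left unchanged, which is why premultiplication only visibly affects translucent pixels.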
Does GIF specify some form of grayscale format that would not require a palette? Normally, when you have a palette, then you can emulate grayscale by setting all palette entries to gray levels. But with other formats (e.g. TIFF) the grayscale palette is implicit and doesn't need to be saved in the file at all; it is assumed that a pixel value of 0 is black and 255 is white.
So is it possible to create such a GIF? I'm using the giflib C library (5.0.5), if that matters.
I forgot about this question. Meanwhile I found out the answer: the GIF format requires a palette. No way around that.
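Since a palette is mandatory, the workaround is the one the question already hints at: fill the colour map with gray levels. For giflib that means a 256-entry colour map where entry i is (i, i, i); sketched here in Python as flat RGB triples:

```python
# Build a 256-entry grayscale palette as flat RGB triples,
# so palette index i maps to the gray level (i, i, i).
palette = bytes(level for level in range(256) for _ in range(3))

print(len(palette))    # 256 entries * 3 bytes = 768
print(palette[30:33])  # entry 10 -> (10, 10, 10)
```

With giflib you would load these triples into the GifColorType array of the colour map; the pixel indices then behave exactly like 8-bit gray values.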
So, I'm trying to make my flash "games" run more smoothly. I am using individual PNG files for each of my objects in order to create player animations.
I've heard from some places that using individual files like that is bad.
I heard about using sprite sheets in order to compress data and reduce memory usage.
Maybe I have it wrong, but is there a way to merge all of my PNG images (with transparency) into one file in such a way that Flash can still use the images individually?
I am really looking for ways to make my programs run more smoothly in order to be able to have lots of images on screen without much lag. Any ideas on how I can make things run better?
Here is an example of a tile based game I'm trying to make that is having serious lag issues.
TexturePacker allows you to merge PNG files. It generates two files: a PNG and a config file. The PNG is just the merged images, and the config file is a text file you can load into your SWF, parse, and use to split the images back out. The config can be in various formats for different game engines.
Using Photoshop or similar software, you would combine all of the animation frames into one file. The size and shape of the file can be whatever you want, but each of the frames should be the same size, in the same order, and with no space between them. For example, let's say each frame is 25x25px, your walk animation is 10 frames, and you want the final .png to be one long strip. You would make a new .png with dimensions of either 250x25 or 25x250 and then insert all of your frames into that one file in the order of the animation.

It's up to you whether you embed these as display objects or as files that get loaded, but once you have them you just need to use BitmapData to break the input file up into new BitmapData objects and then display them as needed.

Going one step further, let's say that most if not all characters have a walk animation and an action animation. You would make a single class to deal with loading character animations, where the first row of the image file is the walk animation and the second is the action animation.
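The slicing step described above (BitmapData.copyPixels in Flash) can be sketched language-agnostically. Here it is with numpy, using the 25x25, 10-frame walk strip from the example; the pixel data is synthetic, with each frame's red channel marked by its index so the cut can be checked:

```python
import numpy as np

FRAME = 25     # each frame is 25x25 px
N_FRAMES = 10  # the walk animation from the example

# Stand-in for the loaded 25x250 strip: (height, width, RGBA)
strip = np.zeros((FRAME, FRAME * N_FRAMES, 4), dtype=np.uint8)
for i in range(N_FRAMES):
    strip[:, i * FRAME:(i + 1) * FRAME, 0] = i  # mark frame i

# The "copyPixels" step: cut the strip into individual frames
frames = [strip[:, i * FRAME:(i + 1) * FRAME].copy()
          for i in range(N_FRAMES)]

print(len(frames), frames[3][0, 0, 0])  # 10 frames; frame 3 is marked 3
```

The same index arithmetic (frame i starts at x = i * FRAME) is all the runtime needs, which is why a sprite sheet plus a frame size is a complete replacement for individual files.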