Does GIF specify some form of grayscale format that would not require a palette? Normally, when you have a palette, you can emulate grayscale by setting all palette entries to gray levels. But with other formats (e.g. TIFF) the grayscale palette is implicit and doesn't need to be saved in the file at all; it is assumed that a pixel value of 0 is black and 255 is white.
So is it possible to create such a GIF? I'm using the giflib C library (5.0.5), if that matters.
I forgot about this question. In the meantime I found out the answer: the GIF format requires a palette. There is no way around that.
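For illustration, here is a minimal sketch in Python with Pillow (an assumption; the question uses giflib in C, where the same rule applies): even an image saved from 8-bit grayscale data ends up with an explicit gray palette in the GIF file.
# Sketch with Pillow (assumed library): GIF always stores a palette,
# so a grayscale image is written with an explicit gray ramp.
from PIL import Image

img = Image.open("input.png").convert("L")  # 8-bit grayscale, no palette
img.save("output.gif")                      # Pillow must build a palette to write GIF

gif = Image.open("output.gif")
print(gif.mode)              # expected: 'P' (paletted)
print(gif.getpalette()[:9])  # expected: the start of a gray ramp, e.g. [0, 0, 0, 1, 1, 1, 2, 2, 2]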
I am working with images of text which have diagrams in them. My images are basically black and white, so I do not see why I need colors in my images. I got some decent results with the default settings, but I want to test on grayscale images too. I am using this tutorial as the base, which by default uses AlexeyAB's repo for darknet. I think I have to change the config file as:
channels=3 # I think I have to change it to 0
momentum=0.9
decay=0.0005
angle=0 # A link says that I have to comment all of these out
saturation = 1.5 # This one
exposure = 1.5 # and this one too
hue=.1 # Should I change it to 0 too?
But there is this link which says that I have to comment out hue, saturation, angle, exposure, etc. I want to know:
Do I have to save the images as grayscale in the directory, or will the code do it by itself?
Does some other configuration have to be changed apart from setting channels=1? Setting hue to 0 is also suggested in this link.
Do I need to modify some function which deals with loading the images, such as the load_data_detection function given in this link?
Just changing channels=1 in the config file should work. If not, then comment out the other parameters like angle, hue, exposure, and saturation and try again.
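For reference, the relevant part of the .cfg file would then look roughly like this (a sketch based on the values quoted in the question):
channels=1       # was channels=3
momentum=0.9
decay=0.0005
# If channels=1 alone doesn't work, comment out the augmentation parameters:
# angle=0
# saturation = 1.5
# exposure = 1.5
# hue=.1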
There are posts online and on the official repository suggesting to edit the channels, hue value, etc.
Here is what works for SURE (I tried):
For the training set, either take grayscale images or convert RGB to grayscale using OpenCV (see the batch-conversion sketch after these steps).
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # cvtColor returns the converted image
Train YOLOv4 AS YOU WOULD FOR RGB, i.e. without messing with channels, hue, etc.
Note: Don't forget to set the steps, batch size, number of channels, etc. as you would normally for RGB images; just train the model on grayscale images instead of RGB.
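A small batch-conversion sketch for the first step (directory names are placeholders; OpenCV with Python is assumed):
import glob
import os

import cv2

os.makedirs("train_gray", exist_ok=True)
for path in glob.glob("train_rgb/*.jpg"):
    image = cv2.imread(path)                        # OpenCV loads as BGR
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # single-channel grayscale
    cv2.imwrite(os.path.join("train_gray", os.path.basename(path)), gray)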
A few personal observations, for which I couldn't find a theoretical explanation:
YOLOv4 trained on RGB images, won't work on B&W/Grayscale images.
YOLOv4 trained on B&W/Grayscale images, won't work on RGB.
Hope this helps.
Edit: I have yet to verify whether this is computationally more expensive than a model trained with reduced channels, although in my tests it neither reduced nor improved the inference times.
In Spark AR Studio I've applied a texture to a rectangle and put it in the scene; the texture is a transparent PNG with some text on it. What I'm trying to achieve is to use a native Slider UI element to change the color of the non-transparent parts of this PNG, which is the "Hello World" text, obviously.
The best related article I could find is from their own docs: Adding a Color Filter, which takes the steps below (sketched in Python after the list):
Imports the texture in the patch editor
Pipes the RGB output into a color space patch and converts the color space into HSL mode
Unpacks the HSL values
Modifies the values somehow
Packs them again, converts the color space back to RGB, and adds an alpha value
And finally pipes the output (modified color values) into the material texture, which is controlled as a patch
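As an illustration only, the same unpack-modify-repack pipeline can be sketched in Python with Pillow and colorsys (both assumptions; Spark AR does this with patches, not Python):
import colorsys

from PIL import Image

img = Image.open("hello_world.png").convert("RGBA")  # hypothetical file name
pixels = img.load()

for y in range(img.height):
    for x in range(img.width):
        r, g, b, a = pixels[x, y]
        if a == 0:
            continue  # leave fully transparent pixels untouched
        # Unpack to HLS (colorsys's ordering of HSL), modify, repack.
        h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        h = (h + 0.5) % 1.0  # example modification: rotate the hue
        r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
        pixels[x, y] = (int(r2 * 255), int(g2 * 255), int(b2 * 255), a)

img.save("hello_world_recolored.png")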
Here's my setup, which is not working: it makes the whole rectangle black instead of changing only the non-transparent parts of the texture.
Any ideas on how to fix this? I know I haven't added the Slider UI element to my patch editor yet; at this point I'm playing around with the values in the orange groups. If that works, I'm going to let a slider control those values.
From what I understand, you want to change the "Hello World" to different colors using the Slider UI?
Bring the rectangle's material texture into the patch editor by selecting the arrow.
Drag and drop your "Hello World" texture into the patch editor
Import the Adjust colors shader from the AR Library
In your patch editor, right-click, search for "Slider UI" and insert it (it only works for Instagram, so go to "Project" on your toolbar --> "Edit missing properties" and uncheck "Facebook")
Connect your "Hello world" Texture RGBA output to the texture input in the Adjust Colors Shader
Connect your "Slider UI" output to the "Hue" input in the Adjust Colors Shader
Connect your Adjust Colors Shader Output to the material "Diffuse Texture"
Click to see how your patch editor should look.
Maybe it's just a matter of preference, but I would change the "Hello World" to black & white to make the color changes more visible (I use Photoshop to desaturate, but any editor that works is fine).
I have used OpenCV to read PNG files (with no background) and convert them into pygame surfaces. Here are two examples of how I did it:
Method 1:
character=pygame.image.load("character.png")
Method 2:
CharacterImage = cv2.imread("character.png")
CharacterImage = convertToRGB(CharacterImage, CharSize, CharSize)
# CharSize is the desired character size (for example: 50)
# convertToRGB is a user-defined function I created, since pygame renders in RGB whereas cv2 loads images in BGR format.
CharacterActions.append(rotate(pygame.surfarray.make_surface(CharacterImage), charrot))
# charrot is the rotation angle
I understand that I could manually resize the images and then use the first method to get the transparent-background image. But I want to know if it's possible to obtain the same via the second method. I don't want to manually edit so many images by resizing them, and that's why I want to know if there's a way to do the same.
Thanks in Advance
On the images you load using your Method 1 (pygame.image.load()), you should use pygame.transform.scale() and pygame.transform.rotate() to manipulate the loaded image.
To maintain the transparency, you need to keep the alpha of the image you are loading. To do that, use .convert_alpha() on the resulting image. You can do that after or before the transform.
I have linked each of these commands to its documentation; just click on them to read more.
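A minimal sketch of that sequence (the file name is taken from the question; the 50-pixel size and the 45-degree angle are example values):
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))  # a display must exist before convert_alpha()

character = pygame.image.load("character.png").convert_alpha()  # keep per-pixel alpha
character = pygame.transform.scale(character, (50, 50))         # resize; alpha is preserved
character = pygame.transform.rotate(character, 45)              # rotate; alpha is preserved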
I have two PNGs and want to generate a GIF in which png-1's alpha decreases step by step so that png-2 shows through.
If I generate some new PNGs that differ only in png-1's alpha and add them all to the GIF, I can get what I want, but the GIF file is very large.
I want to know if there is a way to generate the GIF I want with just two frames.
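For reference, a minimal sketch of the many-frame blend approach described above (Pillow, and two same-size PNGs, are assumptions):
from PIL import Image

png1 = Image.open("png-1.png").convert("RGBA")
png2 = Image.open("png-2.png").convert("RGBA")

# Blend png-2 over png-1 at increasing opacity; each blend becomes one GIF frame.
frames = [
    Image.blend(png1, png2, step / 10).convert("RGB").convert("P")
    for step in range(11)
]
frames[0].save("fade.gif", save_all=True, append_images=frames[1:], duration=100, loop=0)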
I've noticed that on some sites, a very low resolution version of an image gets displayed underneath the final version before it's done loading, to give the impression that the page is loading faster. How is this done?
This is called progressive JPEG. When you save a picture using a tool like Photoshop, you need to specify that you want this JPEG flavor.
For example, in Photoshop's "Save for Web" dialog you will find a Progressive option that can be enabled.
What you are asking for depends upon the decoder and display software used. As noted, it occurs in progressive JPEG images. In that type of JPEG, the coefficients are broken down into separate scans.
The decoder then needs to update the image between decoding scans rather than just at the end of the image.
There was more need for this in the days of dial-up modems. Unless the image is really large, it is usually faster just to wait and display the whole image.
If you are programming, the display software you use may have an option to update after scans.
Most libraries now use a model where you decode an image file stream into a generic image buffer. Then you display the image buffer. In this model, there generally is no place to display the images on the fly.
In short, you enable this by creating progressive JPEG images. Whether the image displays fading in depends entirely on what is used to display the image.
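If you are producing the images programmatically, here is a sketch with Pillow (an assumption; any encoder with a progressive option works) that writes a progressive JPEG and checks the flag:
from PIL import Image

img = Image.open("input.jpg")
img.save("output.jpg", "JPEG", progressive=True, quality=85)

# Check whether an existing JPEG was saved as progressive:
print(Image.open("output.jpg").info.get("progressive", 0))  # expected: 1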
As an alternative, you can batch-optimize all your images using ImageMagick's convert command like this:
convert -strip -interlace plane input.jpg output.jpg
You can use other interlace types (such as Line or Partition) instead of plane.
Or just prefix the output filename with PJPEG:
convert -strip input.jpg PJPEG:output.jpg
Along with a proper file search or filename expansion, e.g.:
for i in images/*; do
  # the conversion command from above; adjust the output path as needed
  convert -strip -interlace plane "$i" "output/$(basename "$i")"
done
The -strip option strips any profiles or comments, to make the conversion "cleaner". You may also want to set the -quality option to limit the quality loss.