Is there a trick to creating an animated gif of tv static that will allow it to be relatively small?

Apologies in advance, but this isn't really a Photoshop question. Rather, I'm trying to come up with something that is convincing but exploits the compression and features of the GIF format as well as possible, to produce the smallest possible file for the animation.
Some constraints:
It needs to be at least 20 or 30 frames. I've tried with fewer (and since the frames are largely incompressible, 15 frames is half the size of 30, generally speaking).
Size needs to be no less than about 256x192
It doesn't need to be color though, nor even full grayscale. I've seen convincing stills with as few as about 16 grays
It can have a pattern, but not one that is instantly obvious to the human eye. If someone takes a single frame and can spot the pattern after a minute or two (which makes it compressible?), that's OK.
Frames 2 through n can use quite a bit of alpha, but when I started using big horizontal stripes of alpha, it was instantly noticeable to my eyes. So you don't get to rack up a bunch of RLE with the easy cheat.
All of the above, and it still needs to look good at 30-33 ms frame speed. No variable speed, and no relying on anything significantly faster than that.
Also acceptable: an APNG that complies with the above constraints. Possibly even MPEG, if you can come up with that (I'm ignorant of how the DCT does its magic).
Ideally I could get something down into the 250 KB range, but I'd settle for anything significantly smaller than the 9 MB monstrosity I cooked up last week.
Oh, and one last thing: obviously I don't expect anyone to supply the graphic for me. I'm just looking for some trick(s) that will let me get there myself eventually.

This is a very interesting question.
Static (random noise) is by its nature highly incompressible. Information theory says that true noise is essentially incompressible, and the more patterns something contains, the more compressible it becomes (to the point of a solid run of 1s or 0s being perfectly compressible).
The ideal would be to create a true noise generator (just random numbers), but that doesn't help within the constraints of your problem.
The best thing I can think of is storing a number of small tiles of static and displaying them in a staggered fashion to prevent the eye from catching on to any patterns. Aside from that, you won't have much luck compressing this below 256 x 192 x 20 / 2 bytes, or about 500 kilobytes (assuming 20 frames at 256 x 192 with 4-bit color depth).
Simply encoding your animated GIF in 16-color mode should get you to that point.
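A hedged sketch of that tile idea, in TypeScript against a canvas (the tile size, tile count, and element lookup are arbitrary choices here; actually writing the frames out as a GIF would need a separate encoder library):

const TILE = 64;       // tile edge in pixels (assumed value)
const TILE_COUNT = 8;  // how many distinct noise tiles to pre-generate

function makeNoiseTile(size: number): HTMLCanvasElement {
  const c = document.createElement("canvas");
  c.width = c.height = size;
  const ctx = c.getContext("2d")!;
  const img = ctx.createImageData(size, size);
  for (let i = 0; i < img.data.length; i += 4) {
    const g = (Math.random() * 16 | 0) * 17; // 16 gray levels, as the question allows
    img.data[i] = img.data[i + 1] = img.data[i + 2] = g;
    img.data[i + 3] = 255;
  }
  ctx.putImageData(img, 0, 0);
  return c;
}

const tiles = Array.from({ length: TILE_COUNT }, () => makeNoiseTile(TILE));
const screen = document.querySelector("canvas")!;
const sctx = screen.getContext("2d")!;

function frame() {
  for (let y = 0; y < screen.height; y += TILE) {
    for (let x = 0; x < screen.width; x += TILE) {
      // a random tile per cell per frame; the stagger hides the repetition
      sctx.drawImage(tiles[Math.random() * TILE_COUNT | 0], x, y);
    }
  }
  setTimeout(frame, 33); // ~30-33 ms per frame, per the constraints
}
frame();

With 8 tiles of 64x64 at 4 bits per pixel you only ever store 8 x 64 x 64 / 2 = 16 KB of unique noise, no matter how many frames you show.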

Well, this is old but still unanswered (no accepted answer anyway), so:
1) Create the NoSignal image data. If it is not obvious how, read this: NoSignal in asm and C++
2) Encode it into a GIF.
I had played with this a bit, so I used a resolution of 320x240; the lowest usable bit depth is 3 bits per pixel (lower does not look good), with a single global palette only (obviously). Here is a 300 KB example.
[Notes]
If this is just for some app, then generate the image on the fly; it is really just a few lines of code. See the linked answer in bullet #1.
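For the "generate it on the fly" case it really is only a few lines; a rough canvas sketch (the element id "tv" is an assumption, not from the answer):

const cv = document.getElementById("tv") as HTMLCanvasElement;
const cx = cv.getContext("2d")!;
const buf = cx.createImageData(cv.width, cv.height);

setInterval(() => {
  for (let i = 0; i < buf.data.length; i += 4) {
    const g = Math.random() * 256 | 0; // one random gray per pixel
    buf.data[i] = buf.data[i + 1] = buf.data[i + 2] = g;
    buf.data[i + 3] = 255;
  }
  cx.putImageData(buf, 0, 0);
}, 33);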

Yes, you can achieve that with lossy GIF compression, or rather with a specifically rigged compressor that outputs a noisy LZW stream.
A best-case scenario for LZW compression is to output X pixels, then X+1 pixels, then X+2 pixels, etc. It's easy to make that noisy.
Try rigging the gfc_lookup function to (almost) always return the longest dictionary item, and compress a series of noisy frames with it:
https://github.com/pornel/giflossy/blob/master/src/gifwrite.c#L270

Not easily, normally. Good randomness (high entropy) by definition does not compress well. Making it greyscale may help, but not much.
If you want to do this on a web page and you have (some) control, you can always write a very small bit of JS to help... if you can do that, then you can do the following:
Create a gif about 1.5x the size you need with high-entropy static.
Set the clipping to the size you want.
Then you randomly move it around by changing the starting offset.
As long as your offsets are a decent distance away from one another (and don't repeat patterns) it is usually difficult to discern it as movement, and it looks truly like static.
I did this trick about 20 years ago on an Amiga to emulate static in a limited-memory demo, and it worked remarkably well... it also does not require fast low-level code, as everything was done by changing offsets and the co-processor blitted the rest.
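The same trick ports directly to canvas; a rough sketch (the element id is assumed, and the 1.5x factor follows the description above):

const view = document.getElementById("static") as HTMLCanvasElement;
const vctx = view.getContext("2d")!;

// Build one oversized noise canvas, ~1.5x the viewport, filled exactly once.
const big = document.createElement("canvas");
big.width = Math.ceil(view.width * 1.5);
big.height = Math.ceil(view.height * 1.5);
const bctx = big.getContext("2d")!;
const img = bctx.createImageData(big.width, big.height);
for (let i = 0; i < img.data.length; i += 4) {
  const g = Math.random() * 256 | 0;
  img.data[i] = img.data[i + 1] = img.data[i + 2] = g;
  img.data[i + 3] = 255;
}
bctx.putImageData(img, 0, 0);

setInterval(() => {
  // jump to a random offset each frame; large jumps read as static, not motion
  const ox = Math.random() * (big.width - view.width) | 0;
  const oy = Math.random() * (big.height - view.height) | 0;
  vctx.drawImage(big, ox, oy, view.width, view.height, 0, 0, view.width, view.height);
}, 33);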


Producing many different hashes of a jpg file with minimal change to the picture

My goal is to write a program (e.g. in Python or C++) which takes a JPG file as input (e.g. tux.jpg) and makes tiny changes to it, such that it outputs many different images (maybe a thousand images or even more), but in a way that all these images, while having different hashes, look almost the same visually, i.e. the changes should have the least possible impact on the original image.
I first thought to play around with the JPG header, but that might not be enough to make the many thousands of different pictures I want.
As a naive approach, I thought of flipping a random bit in the file, but that bit could produce a less-than-desirable result, visible especially in small pictures (e.g. a dark pixel in the white space of the tux picture). Ideally, I would like to change a random pixel to a "neighboring" color, such that the two resulting pictures have almost no visual difference.
For this purpose, I read the JPG codec example but found it very confusing and hard to understand. Can someone explain what my program should look for as it parses the file in binary format, and how to change a random pixel to a "neighboring" color?
You can change the comment part of the file by playing with the file header. A simple way to do that is to use a ready-made open-source program that lets you set a comment of your choice, for example HLLO repeated 8 times. That gives you 256 bits to play with. You can then find where the HLLO pattern sits in the file using a hex editor, load the data into memory, and start changing those 32 bytes, computing the hash each time until you get a collision (a hash that matches).
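If the goal is just many distinct hashes (as the question asks) rather than actual collisions, varying those filler bytes is already enough, since any byte change yields a new digest. A minimal Node sketch under assumptions: the file already carries the 32-byte filler comment, and COMMENT_OFFSET (a made-up value) is where your hex editor found it:

import { createHash } from "crypto";
import { readFileSync, writeFileSync } from "fs";

const COMMENT_OFFSET = 0x14; // hypothetical offset of the "HLLO..." filler bytes
const src = readFileSync("tux.jpg");

for (let n = 0; n < 1000; n++) {
  const copy = Buffer.from(src);          // fresh copy of the original bytes
  copy.writeUInt32LE(n, COMMENT_OFFSET);  // overwrite 4 bytes inside the comment
  const digest = createHash("sha256").update(copy).digest("hex");
  writeFileSync(`tux-${digest.slice(0, 8)}.jpg`, copy);
}

Because only comment bytes change, every variant decodes to identical pixels.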
By the time you find a collision, the universe will have ended.
Although in theory doable, it's practically impossible to find a SHA-256 collision in a reasonable amount of time; if it were feasible, standard encryption protocols would be broken and hackers would be enjoying their time.

cocos2d-x: How to specify a pixel format for a specific Texture2D

I've set my app to use the RGBA4444 pixel format by default for its Texture2D objects.
Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGBA4444);
This works great -- it has cut my memory footprint in half, and 16 bit still looks fine for most images. However, some images are a bit messed up... namely those that are primarily one colour, such as a particular grassy background texture which uses quite a large number of green shades. It looks absolutely horrible in anything less than 32 bit, and I'd rather not have to create a texture that looks nicer when 16-bit'ified.
The best solution for me would be to be able to set the sprite to use RGBA8888 as an exception. After all, 'setDefaultAlphaPixelFormat()' implies that it can be specified otherwise, right? But I have been as-yet unable to find out how.
Can it be done?
A solution is to change the default format before loading and set it back again afterwards. This is how I did it:
Texture2D::PixelFormat defaultFormat = Texture2D::getDefaultAlphaPixelFormat();
Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGBA8888);
// Load RGBA8888 textures here
TextureCache::getInstance()->addImage(IMAGE_BACKGROUND_GRASS);
Texture2D::setDefaultAlphaPixelFormat(defaultFormat);
I think this is about as good as it gets, thanks LearnCocos2D. :)

Is there a performance benefit to pre-rendering an HTML5 canvas circle?

I understand it's often faster to pre-render graphics to an off-screen canvas. Is this the case for a shape as simple as a circle? Would it make a significant difference for rendering 100 circles at a game-like framerate? 50 circles? 25?
To break this into two slightly different problems, there are two aspects to what you're asking:
1) is drawing a shape off-screen and putting it on-screen faster
2) is drawing a shape one time and copying it to 100 different places faster than drawing a shape 100 times
The answer to the first one is "it depends".
That's a technique known as "buffering" and it's not really about speed.
The goal of buffering an image is to remove jerkiness from it.
If you drew everything on-screen, then as you loop through all of your objects and draw them, they're updating in real-time.
In the NES-days, that was normal, because there wasn't much room in memory, or much power to do anything about it, and because programmers didn't know much better, with the limited instructions they had to work with.
But that's not really the way games do things, these days.
Typically, they call all of the draw updates for one frame, then they take that whole frame as a finished image, and paste that whole thing on the screen.
The GPU (and GL/DirectX) takes care of this, by default, in a process called "double-buffering".
It's a double-buffer, because there's room for the "in-progress" buffer used for the updates, as well as the buffer that holds the final image from the last frame, that's being read by the monitor.
At the end of the frame processing, the buffers will "swap". The newly full frame will be sent to the monitor and the old frame will be overwritten with the new image data from the other draw calls.
Now, in HTML5 there isn't really access to the frame-buffer, so we do it ourselves: make every draw call against an offscreen canvas, and when all of the updates are finished (the image is stable), copy and paste that whole image to the onscreen canvas.
There is a large speed-optimization in here, called "blitting", which basically copies over only the parts that have changed, and reuses the old image.
There's a lot more to it than that, and there are a lot of caveats, these days, because of all of the special-effects we add, but there it is.
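As a concrete illustration of that manual buffering, a minimal sketch (the element id and the per-object draw calls are placeholders; modern browsers already buffer the canvas between requestAnimationFrame ticks, so this mainly matters when you composite in several passes):

const front = document.getElementById("game") as HTMLCanvasElement;
const fctx = front.getContext("2d")!;
const back = document.createElement("canvas"); // the off-screen "in-progress" buffer
back.width = front.width;
back.height = front.height;
const bctx = back.getContext("2d")!;

function render() {
  bctx.clearRect(0, 0, back.width, back.height);
  // ...all per-object draw calls go against bctx here...
  fctx.drawImage(back, 0, 0); // present the completed frame in one copy
  requestAnimationFrame(render);
}
requestAnimationFrame(render);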
The second part of your question has to do with a concept called "instancing".
Instancing is similar to blitting, but while blitting is about only redrawing what's changed, instancing is about drawing the exact same thing several times in different places.
Say you're painting a forest in Photoshop.
You've got two options:
Draw every tree from scratch.
Draw one tree, copy it, paste it all over the image.
The downside of the second one is that each "instance" of the image looks exactly the same.
If your "template" image changes colour or takes damage, then all instances of the image do, too.
Also, if you had 87 different tree variations for an 8000-tree forest, making instances of them all would still be very fast, but it would take more memory, because you now need to keep 87 source images in memory instead of just one, to reference on every draw call.
The upside is that it's still much, much faster.
To answer your specific question about X circles, versus instancing 1 circle:
Yes, it's still going to be a lot faster.
What a "lot" means, though, will change based on a lot of different things, because now you're talking about browsers on PCs.
How strong is the PC?
How good is the videocard?
How large is the canvas in software-pixels (not CSS pixels)?
How large are the circles? Do they have alpha-blending?
Is this written in WebGL or software?
If software, is the canvas compositing in hardware mode?
For a typical PC, you should still be able to hit 60fps in Chrome, drawing 20 circles, I think (depending on what you're doing to them... ...just drawing them onscreen, every frame is simple), so in this case, the instances are still a "lot" faster, but it's not going to matter, because you've already passed the performance-ceiling of Canvas.
I don't know that the same would be true on phones/tablets, or battery-powered laptops/netbooks, though.
Yes, transferring from an offscreen canvas is faster than even a primitive drawing like an arc-circle.
That's because the GPU just copies the pixels from the offscreen canvas (not much CPU effort required).
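A minimal sketch of that pre-render-then-copy approach (radius, color, and counts are arbitrary here):

const R = 10;
const stamp = document.createElement("canvas"); // off-screen "template" circle
stamp.width = stamp.height = R * 2;
const sctx = stamp.getContext("2d")!;
sctx.beginPath();
sctx.arc(R, R, R, 0, Math.PI * 2);
sctx.fillStyle = "tomato";
sctx.fill();

const target = document.querySelector("canvas")!.getContext("2d")!;
for (let i = 0; i < 100; i++) {
  // each instance is a plain pixel copy instead of a fresh path fill
  target.drawImage(stamp, Math.random() * 500, Math.random() * 300);
}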

html5 svg vs canvas for granite like background

I want to make a granite-like background like http://www.tivli.com/ with a gradient at the center. I have found how to do gradients with both in the W3C tutorials, but are there any tutorials on how to make granite backgrounds in HTML5 canvas or SVG? Thanks.
The site you referenced actually uses a simple 'noize.png' and then uses CSS3 radial gradients to build up that background. I know you already knew that; I'm mentioning this for future readers.
Given this fact, I'll assume in the rest of my answer that you want to learn, not get a copy-pasta solution.
I gave up on SVG a looong time ago. But in canvas it's easy and fun... (especially now that Flash is FINALLY officially dead. Hurray).
So as others have already suggested in the comments to your question, why not use a seamless noise image? (you know where to find one :P).
You could still embed this image as DATA in the HTML (hint: you could even feed the image data straight into canvas and have it rendered as your noise image).
But then... you are right: what if you wanted to change the noise size?
And you want to know how to make granite/noise anyway...
And is mathematically/programmatically describing this noise lower in character count (file size) than supplying a ready-made image (fragment)?
Start UPDATE 2 part 1:
Actually, after a good night's sleep I realized/remembered that visual noise is one of the BEST ways to judge randomness. Humans are notoriously good at finding visual patterns, and even professionals use this (it is also heavily used in cryptography, where one would need, for instance, a useful one-time pad).
Also see 'commander' Crockford's YUI-lecture 'Principles of Security' from 19m07s to 22m37s.
Now why is this important? Well, ECMAScript (aka JavaScript) defines a loose Math.random() function:
"returns a number value with positive sign, greater than or equal to 0 but less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-dependent algorithm or strategy"
Re-read the italic/bold part and welcome yourself to reality: each and every browser (brand/version) has its own random routine!!
"But what does that mean?" Well... simply put: depending on the browser version's ES-Script implementation (cough cough, IE), noise based on Math.random() will/might render visible patterns in your noise (independently of the tile size)!!
So for the rest of this answer we are going to assume either an ideal world where browsers spit out proper random numbers, or that you took control and used a stronger 'predictable' random solution, as discussed in this wonderful article that Google's bubble accidentally leaked :)
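If you do take that control, a tiny seedable PRNG gives identical noise on every engine. A hedged sketch using the classic xorshift32 step (just a common choice, not necessarily the one from the linked article):

function xorshift32(seed: number): () => number {
  let s = seed >>> 0 || 1; // state must be a non-zero 32-bit value
  return () => {
    s ^= s << 13; s >>>= 0;
    s ^= s >>> 17; s >>>= 0;
    s ^= s << 5;  s >>>= 0;
    return s / 0x100000000; // map to [0, 1), like Math.random()
  };
}

const rnd = xorshift32(0xBADC0DE);
console.log(rnd(), rnd()); // reproducible across browsers for the same seed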
End Update 2 part 1
So let's start with the radial gradient-part. You already figured that one out.
OK, then follows the noise function in canvas (you could do this before the radial gradient, but this order gives a nicer grain and diffuses the color banding the gradient produces -- on an average LCD you would see the bands anyway, since they're not true color): this is done by generating random pixels.
There are a lot of different algorithms one could use; I've used a straightforward one that you can understand without math.
Note that generating noise for a modern full-screen resolution easily exceeds one mega-pixel, so this would be slow! To overcome this we need to generate and RE-USE a small seamless tile, which we use as a pattern fill in our full-size image that already has the radial gradient.
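The fully documented version is in the fiddle linked below; as a stripped-down sketch of the tile-plus-pattern idea (tile size and alpha values are arbitrary assumptions):

const TILE = 64;
const tile = document.createElement("canvas");
tile.width = tile.height = TILE;
const tctx = tile.getContext("2d")!;
const px = tctx.createImageData(TILE, TILE);
for (let i = 0; i < px.data.length; i += 4) {
  px.data[i] = px.data[i + 1] = px.data[i + 2] = 0; // black grain
  px.data[i + 3] = Math.random() < 0.5 ? 0 : 40;    // sparse, faint alpha
}
tctx.putImageData(px, 0, 0);

// repeat the tile over the full-size canvas that already has the gradient
const bg = document.querySelector("canvas")!.getContext("2d")!;
bg.fillStyle = bg.createPattern(tile, "repeat")!;
bg.fillRect(0, 0, bg.canvas.width, bg.canvas.height);

A pure-random tile tiles seamlessly by construction: there is no structure for the edges to break.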
I also assume you want the radial gradient fluidly centered in the view-port, so if you want to go the fixed route (as opposed to the noize.png/CSS3 way you referenced), you'll also need an extra 'onResize()' event handler to have canvas render a new background.
Why? Well, if you were to let the browser scale this background image (created on page load) automatically, then the nice grain size of your noise would change too, even leading to visible PATTERNS that you would not want.
(Since I desperately want to go to sleep now...) The rest is thoroughly explained in the source code of the function I wrote for you.
Here is the link to the fully documented code I wrote for you: jsfiddle.net/sU74C/ and here you can see it in full-screen preview.   UPDATE 1: function genNoise 80% FASTER!!
Use it if you like (retaining the link to this answer) or learn from it and do your own thing.
PLEASE DON'T FORGET to accept AN answer to this question (hopefully mine :))
Hope this helps!
UPDATE 2 part 2:
There are more ways to interact with canvas. One could also calculate/(re-)use/generate/save/import pixel maps/arrays (as PNG or base64 or JPG or ...); for instance, see this excellent article on faster 8-bit and even faster 32-bit pixel arrays (if the browser supports 'Uint8ClampedArray' as the type of the data property of the ImageData object), including a proper solution to account for the endianness of the processor!!
So after giving this some considerable thought, it turns out that doing this 'right' is actually a challenge and should be divided into 2 parts:
1) Where do I get my noise data (Math.random(), a custom random, pre-defined external data (image, JSON string, random.org), or embedded (packed?) data)?
2) What is the fastest way to build/store/re-use this noise at full-screen size on the canvas?
Given the statements in part 1 of this update, and given that we don't want patterns in our visible noise, I'm starting to lean toward using some pre-rendered 'random' noise data (meant to tile seamlessly) embedded in the noise generator: otherwise there is the overhead of running your own non-engine-optimized random function (times... a lot).
Also, I think one might get away with just black and white plus transparency afterwards. This might considerably speed things up AND reduce the embedded pixel data.
Think about it: black or white equals 0 or 1..
In base64 one character can represent 6 bits. So a 30x30 px image has 900 px, which divided by 6 bits gives 150 characters (the sweet spot increments by 6 px, so the next is 36x36 px at 216 characters).
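As a sanity check on that arithmetic, a small sketch that packs 1-bit pixels and base64-encodes them (note that btoa works on whole bytes, so the real count comes out at 152 characters rather than the ideal 150):

function packBits(bits: number[]): string {
  let bytes = "";
  for (let i = 0; i < bits.length; i += 8) {
    let b = 0;
    for (let j = 0; j < 8; j++) b = (b << 1) | (bits[i + j] ?? 0); // pad the tail
    bytes += String.fromCharCode(b);
  }
  return btoa(bytes); // 900 bits -> 113 bytes -> 152 base64 characters
}

const noise = Array.from({ length: 30 * 30 }, () => (Math.random() < 0.5 ? 1 : 0));
console.log(packBits(noise).length); // 152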

downscale large 1 bit tiff to 8 bit grayscale / 24 bit

Let's say I have a 100000x100000 1-bit (K channel) TIFF with a DPI of 2000 and I want to downscale it to a DPI of 200. My resulting image would be a 10000x10000 image. Does this mean that every 10 bits in the 1-bit image correspond to 1 pixel in the new image? By the way, I am using libtiff and reading the 1-bit TIFF with TIFFReadScanline. Thanks!
That means every 100 bits in the 1-bit image correspond to 1 pixel in the new image; you'd need to average the value over a 10x10 1-bit pixel area. For smoother grayscale, average over at least as many source bits as your target pixel has levels, overlapping each calculated area partially with its neighbors (16x16 px squares spaced 10 px apart, so their borders overlap, give the full range of a smooth 8-bit grayscale).
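A sketch of the plain 10x10 box average (it assumes the scanlines were already unpacked to one value per pixel, which TIFFReadScanline does not do for you; the overlapping 16x16 variant works the same way with a wider window):

function downscale1BitTo8Bit(
  bits: Uint8Array, // one entry per source pixel, value 0 or 1 (unpacked)
  srcW: number, srcH: number, block = 10
): Uint8Array {
  const dstW = srcW / block, dstH = srcH / block; // assumes exact divisibility
  const out = new Uint8Array(dstW * dstH);
  for (let y = 0; y < dstH; y++) {
    for (let x = 0; x < dstW; x++) {
      let sum = 0; // count the set bits in this block
      for (let dy = 0; dy < block; dy++)
        for (let dx = 0; dx < block; dx++)
          sum += bits[(y * block + dy) * srcW + (x * block + dx)];
      out[y * dstW + x] = Math.round((sum * 255) / (block * block));
    }
  }
  return out;
}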
It is important to understand why you want to downscale (because of the output medium, or because of file size?). As SF pointed out, colors/grayscale are somewhat interchangeable with resolution. If it is only about file size, lossless/lossy compression is also worth looking at.
The other thing is to understand a bit about the characteristics of your source image. For instance, if the source image is rasterized (as for newspaper images) you may get awkward patterns because the dot matrix gets messed up. I once tried to restore an old newspaper image and found it a lot of work; I ended up converting it to grayscale first before enhancing the image.
I suggest experimenting a bit with VIPS or IrfanView to find the best results (i.e. what the effect of a certain resampling algorithm is on your image quality). The advantage of these programs (over e.g. Photoshop) is that you can experiment via GUI/command line while staying aware of the names/parameters of the algorithms behind them. With VIPS you can control most if not all parameters.
[Edit]
TiffDump (supplied with LibTiff binaries) is a valuable source of information. It will tell you about byte ordering etc. What I did was to start with a known image. For instance, LibTIFF.NET comes with many test images, including b&w (some with 0=black, some with 1=black). [/Edit]