Producing many different hashes of a jpg file with minimal change to picture - binary

My goal is to write a program (e.g. in Python or C++) that takes a JPG file as input (e.g. tux.jpg) and makes tiny changes to it, so that it outputs many different images (maybe a thousand or even more) which all have different hashes but look almost the same visually; that is, the changes should have the least possible impact on the original image.
I first thought to play around with the JPG header, but that might not be enough to produce the many thousands of different pictures I want.
As a naive approach, I thought of flipping a random bit in the file, but that can produce a less than desirable result, which is especially visible in small pictures (e.g. a dark pixel in the white space of the tux picture). Ideally, I would like to change a random pixel to a "neighboring" color, so that the two resulting pictures have almost no visual difference.
For this purpose, I read the JPG codec example, but I find it very confusing and hard to understand. Can someone explain what my program should look for as it parses the file in binary format, and how to change a random pixel to a "neighboring" color?
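Here is roughly the kind of thing I have in mind, as a sketch with Pillow (I am not sure this is the right approach; re-saving re-encodes the whole file, so many bytes change, but visually the output is indistinguishable):

    import hashlib
    import random
    from PIL import Image  # pip install Pillow

    def perturb(src, dst, seed):
        """Nudge one random pixel by +/-1 per channel and re-save."""
        rng = random.Random(seed)
        img = Image.open(src).convert("RGB")
        x = rng.randrange(img.width)
        y = rng.randrange(img.height)
        # Move each channel to a "neighboring" value, clamped to 0..255.
        img.putpixel((x, y), tuple(min(255, max(0, c + rng.choice((-1, 1))))
                                   for c in img.getpixel((x, y))))
        img.save(dst, quality=95)

    for i in range(5):
        out = f"tux_{i}.jpg"
        perturb("tux.jpg", out, seed=i)
        print(out, hashlib.sha256(open(out, "rb").read()).hexdigest()[:16])

Is parsing the binary myself even necessary, or is something like this enough?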

You can change the comment part of the file by playing with the file header. A simple way to do that is to use a ready-made open-source program that lets you insert a comment of your choice, for example HLLO repeated 8 times. That gives you 256 bits (32 bytes) to play with. You can then find where the HLLO pattern is located in the file using a hex editor, load the data into memory, and start changing those 32 bytes, computing the hash each time, until you get a collision (a hash that matches).
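A sketch of that idea in Python (it assumes the placeholder comment was already inserted with some external tool, and the input file name is just an example). Note that simply enumerating the 32 bytes already gives you as many distinct hashes as you could want; it is only a matching hash that is out of reach:

    import hashlib

    PATTERN = b"HLLO" * 8  # the 32-byte placeholder described above

    def variants(path, count):
        """Yield a SHA-256 digest for `count` rewrites of the comment bytes."""
        data = bytearray(open(path, "rb").read())
        pos = data.find(PATTERN)
        if pos < 0:
            raise ValueError("placeholder comment not found")
        for i in range(count):
            # Overwrite the placeholder with 32 ASCII digits; plain ASCII
            # is safe inside a COM (0xFFFE) segment.
            data[pos:pos + 32] = f"{i:032d}".encode()
            yield hashlib.sha256(data).hexdigest()

    for digest in variants("tux_commented.jpg", 3):
        print(digest)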
By the time you find a collision, the universe will have ended.
Although doable in theory, it is practically impossible to crack SHA-256 in a reasonable amount of time; if it were, standard encryption protocols would be finished and hackers would be enjoying their time.
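To put rough numbers on that (my own back-of-the-envelope): even a birthday-style collision search on SHA-256 needs on the order of 2^128 hashes.

    hashes_needed = 2 ** 128    # birthday bound for a 256-bit hash
    rate = 10 ** 12             # a very generous 1 THash/s
    years = hashes_needed / rate / (60 * 60 * 24 * 365)
    print(f"{years:.2e} years") # ~1.1e19 years; the universe is ~1.4e10 years old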

Related

Is there a "law of diminishing returns" for converting images to Base64 as opposed to simply using the images themselves?

Say I have some icons on my site (32x32, 64x64, 128x128, etc.), then converting them to Base64 makes sense, right?
Now say I have some high resolution images that are 2MB, 3MB, 4MB, and greater that I am using on my site. Does it make sense to convert these larger images to Base64 or is it "smarter" to simply keep using them as .jpg/.png/.gif/etc.?
If there is such a "law", "rule of thumb", etc. what is the line in the sand?
EDIT: While this post was marked as a duplicate, the linked "original" is from over 10 years ago; browser technology, computers, and the web itself, has changed significantly since then. What would be the answer for today's technology?
The answer to the question is: yes, and it depends.
If we rephrase the question to: Does the law of diminishing returns apply to using base64 for embedding images in a page?
a) Yes, the law applies
b) It depends on: image count and size, and your setup (i.e. HTTP version, connection type, etc.)
The reason is that more images require more connections, which imply more handshakes, unless you are using keep-alive connections or HTTP/2 streams. And if the images are bigger and require more computing to convert from base64 back to binary (plus decompression), then the bandwidth savings come at a CPU expense.
In general, if you have lots of images (icons, for example), you could embed as base64. But in that case you also have the following options:
a) Image Atlas: Converting all small images to a single image (one load) and showing only the portion that you need through the page.
b) Converting to alternative formats, such as fonts or SVG, and again rendering what you need. Example: Open Iconic.
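For reference, embedding as base64 is just the raw bytes re-encoded into a data URI (the file name here is only an example); a quick Python sketch:

    import base64

    with open("icon-32.png", "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")

    # Inline data URI; note base64 adds ~33% size overhead over the raw
    # bytes, which is part of why this stops paying off for large images.
    print(f'<img src="data:image/png;base64,{b64}" width="32" height="32">')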

Tesseract on a very specific, large amount of similar images

I have a huge dataset of images from which I want to read text. The data always has the same form in these images: there are two temperature values and two velocity values. Here are some examples:
The biggest problem, I think, is that the text is slightly transparent.
I tried to do it with Tesseract (pytesseract and tesseract.js), but the results are not really good. Sometimes the temperature values are interpreted correctly, but the velocity values are rarely correct. In particular, the decimal point isn't found.
Is there any possibility to optimize Tesseract's predictions by telling it the pattern of my text, since it is always the same in every image?
What I already did is configure the whitelist to
tessedit_char_whitelist =
Do you maybe have any other ideas on how best to preprocess these images to get better results? I already tried increasing the contrast. That resulted in a small improvement, but still nothing particularly good.
Of course, I'm also open to any other OCR libraries and programming languages if you think they would work better.
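For reference, this is roughly what I am running now (the whitelist value is a stand-in, since I can't paste the exact one here, and the threshold/contrast numbers are just what I experimented with):

    import pytesseract
    from PIL import Image, ImageEnhance, ImageOps

    def read_value(path):
        img = ImageOps.grayscale(Image.open(path))
        img = ImageEnhance.Contrast(img).enhance(3.0)      # boost the faint text
        img = img.point(lambda p: 255 if p > 140 else 0)   # hard threshold
        img = img.resize((img.width * 3, img.height * 3))  # upscaling helps Tesseract
        # --psm 7 treats the image as a single text line; the whitelist
        # keeps Tesseract from "seeing" letters in the background.
        config = "--psm 7 -c tessedit_char_whitelist=0123456789.-"
        return pytesseract.image_to_string(img, config=config).strip()

    print(read_value("reading_crop.png"))  # a pre-cropped value region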

What alternative can I use to an SVG word cloud?

Recently, I designed a word cloud in Illustrator for a customer. It uses around 5,000 people's names in white on a colored background, laid out along a logo path, and includes a few vector logos. Each name is ridiculously small, and we want visitors to be able to search the cloud and find their name.
We've put it online as an SVG with success, but a 20 MB file can cause problems!
Everything would be fine until we reach 10,000 visitors at the same time, which would crash all our servers and time everyone out.
So what is our alternative to make this fast, easy for visitors to use, and latency-free? We're thinking about Canvas, but we're not sure it's simple to make a word cloud with a really custom shape (think about following a logo path).
It sounds like you have 20 MB because the names are being stored/represented as paths. If you represent them as text, you will substantially reduce the size of the file, AND make it properly searchable.
Assuming 13 characters per name (including the space in between), UTF-8 encoding, and 10,000 names, the names themselves should only take about 127 KB. You may wish to experiment with transmitting the background of the SVG and the names separately (JSON?), and using a script to construct the cloud in the browser.
Edit: Even if you create a completely static SVG, representing the text as text will result in a substantial saving of space over the use of paths.
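A sketch of the static variant in Python (the names, coordinates, and styling are placeholders; the real layout along the logo path is your existing data):

    import html

    # Placeholder data; in practice, the (name, x, y) tuples come from
    # your existing layout along the logo path.
    names = [("Alice Martin", 10, 20), ("Bob Stone", 64, 20), ("Cho Lee", 10, 34)]

    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="1000" height="800">',
             '<rect width="100%" height="100%" fill="#c00"/>']
    for name, x, y in names:
        # Real text nodes: searchable with Ctrl+F, and tiny compared to paths.
        parts.append(f'<text x="{x}" y="{y}" font-size="6" fill="#fff">'
                     f'{html.escape(name)}</text>')
    parts.append("</svg>")

    open("cloud.svg", "w", encoding="utf-8").write("\n".join(parts))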

Is there a trick to creating an animated gif of tv static that will allow it to be relatively small?

Apologies in advance, but this isn't really a Photoshop question. Rather, I'm trying to come up with something that is convincing but exploits the compression and features of the GIF format as well as possible, to produce the smallest possible file for the animation.
Some constraints:
It needs to be at least 20 or 30 frames. I've tried with fewer (and since the frames are largely incompressible, 15 frames is half the size of 30, generally speaking)
Size needs to be no less than about 256x192
It doesn't need to be color, though, nor even full grayscale. I've seen convincing stills with as few as about 16 grays
It can have a pattern, but not one that is instantly obvious to the human eye. If someone takes a single frame and can spot the pattern after a minute or two (which makes it compressible?), that's OK
Frames 2 through n can use quite a bit of alpha, but when I started using big horizontal stripes of alpha, it was instantly noticeable to my eyes. So you don't get to rack up a bunch of RLE with the easy cheat.
All of the above and still needs to look good at 30-33ms frame speed. No variable speed or relying on anything significantly faster than that.
Also acceptable: an APNG that complies with the above constraints. Possibly even MPEG, if you can come up with that (I'm ignorant of how the DCT does its magic).
Ideally I could get something down in the 250 KB range, but I'd settle for anything significantly smaller than the 9 MB monstrosity I cooked up last week.
Oh, and one last thing: obviously I don't expect anyone to supply the graphic for me. I'm just looking for some trick(s) that will let me get there myself eventually.
This is a very interesting question.
Static (random noise) is by its nature highly incompressible. Information theory says that true noise is basically incompressible, and the more patterns something contains, the more compressible it becomes (to the point of a solid run of 1s or 0s being perfectly compressible).
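You can verify that claim empirically in a couple of lines (zlib standing in for GIF's LZW here):

    import os, zlib

    noise = os.urandom(256 * 192)      # one "frame" of 8-bit static
    pattern = bytes(256 * 192)         # same size, all zeros

    print(len(zlib.compress(noise)))   # ~49 KB: no smaller than the input
    print(len(zlib.compress(pattern))) # a few dozen bytes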
The ideal would be to create a true noise generator (just random numbers), but that doesn't help within the constraints of your problem.
The best thing I can think of is storing a number of small tiles of static and displaying them in a staggered fashion to prevent the eye from catching on to any patterns. Aside from that, you won't have much luck compressing this below 256 x 192 x 20 / 2 bytes, or about 500 kilobytes (assuming 20 frames at a resolution of 256 x 192, using 4-bit color depth).
Simply encoding your animated gif in 16 color mode should get you to that point.
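For example, with Pillow (sizes, frame count, and palette taken from the constraints above; expect the result to land near the back-of-the-envelope figure, since LZW gains almost nothing on noise):

    import random
    from PIL import Image  # pip install Pillow

    W, H, FRAMES, GRAYS = 256, 192, 20, 16
    palette = [v for g in range(GRAYS) for v in (g * 17,) * 3]  # 16 gray levels

    frames = []
    for _ in range(FRAMES):
        frame = Image.new("P", (W, H))
        frame.putpalette(palette)  # Pillow pads the remaining entries with 0
        frame.putdata([random.randrange(GRAYS) for _ in range(W * H)])
        frames.append(frame)

    # duration is per frame in ms; 33 ms is roughly the requested 30 fps.
    frames[0].save("static.gif", save_all=True, append_images=frames[1:],
                   duration=33, loop=0)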
Well, an old question but still without a checked answer, so:
1. Create the NoSignal image data. If it is not obvious how, read this: NoSignal in asm and C++.
2. Encode it into a GIF. I had played with this a bit, so I used a resolution of 320x240; the lowest usable bit depth is 3 bits per pixel (lower does not look good), with a single global palette only (obviously). Here is a 300 KB example.
[Notes]
If this is just for some app, then generate the image on the run; it is really just a few lines of code, see the linked answer in bullet #1.
Yes, you can achieve that with lossy GIF compression, or rather a specifically rigged compressor that outputs a noisy LZW stream.
A best-case scenario for LZW compression is to output X pixels, then X+1 pixels, then X+2 pixels, etc. It's easy to make that noisy.
Try screwing up the gfc_lookup function so that it (almost) always returns the longest dictionary item, and compress a series of noisy frames with it:
https://github.com/pornel/giflossy/blob/master/src/gifwrite.c#L270
Not easily, normally. Good randomness (high entropy) by definition does not compress well. Having it grayscale may help, but not much.
If you want to do this on a web page and you have (some) control, you can always write a very small bit of JS to help... if you can do this, then you can do the following:
Create a gif about 1.5x the size you need with high-entropy static.
Set the clipping to the size you want.
Then you randomly move it around by changing the starting offset.
As long as your offsets are a decent distance away from one another (and don't repeat patterns) it is usually difficult to discern it as movement, and it looks truly like static.
I did this trick about 20 years ago on an Amiga to emulate static on a limited-memory demo, and it worked remarkably well... it also does not require fast low-level code as all was done by changing offsets and the co-processor bitblit-ed the rest.
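If you want to prototype the offset idea outside a browser, here is a rough Pillow sketch (sizes assumed from the question; note that baking the crops back into a GIF loses the size benefit, which in the web version comes from shipping only the one oversized image and moving it with JS/CSS):

    import random
    from PIL import Image

    W, H = 256, 192
    big = Image.effect_noise((W * 3 // 2, H * 3 // 2), 64)  # 1.5x noise image

    frames = []
    for _ in range(20):
        # A fresh, well-separated offset per frame reads as new static.
        x = random.randrange(big.width - W)
        y = random.randrange(big.height - H)
        frames.append(big.crop((x, y, x + W, y + H)))

    frames[0].save("offset_static.gif", save_all=True,
                   append_images=frames[1:], duration=33, loop=0)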

Extract or crop image from within TIFF

I need to extract/crop the logotype (BEAVER) in the middle from a TIFF file that looks like this: http://i41.tinypic.com/2i7rbie.jpg
And then I need to automate the process so it can be repeated about 9 million times...
My guess is that I would have to use some OCR software. But is it possible for such software to "crop anything that starts below this point and ends above this point"?
Thoughts?
Typically, OCR software only extracts text from images and converts it into some text-specific format; it does not crop. However, you can use OCR technologies to achieve your task. I would recommend the following:
1. OCR the whole page.
2. Get the coordinates of the recognized text.
3. Apply your magic rules to the recognized text to locate the area to crop, such as everything in between the "application filled" and "STATEMENT" sentences.
4. Cut that area from the image and export it where you want it.
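A sketch of those steps with pytesseract (the file name is an example; image_to_data returns per-word boxes, so I match on the last word of each marker phrase; a KeyError here is exactly the kind of image that should go to the manual-review queue mentioned below):

    import pytesseract
    from PIL import Image
    from pytesseract import Output

    def crop_between(path, upper_word, lower_word):
        """Crop everything below `upper_word` and above `lower_word`."""
        img = Image.open(path)
        data = pytesseract.image_to_data(img, output_type=Output.DICT)
        boxes = {w.strip().lower(): (t, t + h)
                 for w, t, h in zip(data["text"], data["top"], data["height"])
                 if w.strip()}
        y1 = boxes[upper_word][1]  # bottom edge of the upper marker
        y2 = boxes[lower_word][0]  # top edge of the lower marker
        return img.crop((0, y1, img.width, y2))

    crop_between("page.tif", "filled", "statement").save("logo.png")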
The real challenge is in the amount of text you would like to process. You have to be very careful when defining your "smart rules" to make sure they don't produce false positives, and always send suspicious images to a separate queue that you will later review manually, updating your rules as you go.
In general it may look like this:
Take the first 10 images, define logo detection rules, test and see if everything works well.
Then run on the next 10, see what was processed wrongly and what was not processed, update the rules, and re-process those 10 to make sure everything works well now.
Re-run it on new batches of the same size until it starts working well.
Then increase the batch size from 10 to 100, and go through those batches until everything starts working smoothly again.
Then continue this way, perfecting your rules and increasing the batch size. At some point you will reach production speed.
Most likely you will encounter some strange images that either contradict the existing rules or are just plain wrong. You don't always have to update your rules to accommodate them; it may happen that there are only a dozen images like that in your whole 9-million collection. It might be better to leave them in the exceptions queue for manual processing than to risk the stability of your magic rules.