What is the default quality for HTML5 Canvas.toDataURL?

According to Mozilla (MDN), the second parameter of canvas.toDataURL() is described as follows:
If the requested type is image/jpeg or image/webp, then the second
argument, if it is between 0.0 and 1.0, is treated as indicating image
quality; if the second argument is anything else, the default value
for image quality is used. Other arguments are ignored.
But I can't find anywhere that tells me what the default value actually is.

The spec only alludes to the default being browser-dependent:
The second argument, if it is a number in the range 0.0 to 1.0 inclusive, must be treated as the desired quality level. If it is not a number or is outside that range, the user agent must use its default value, as if the argument had been omitted.
Edit: According to one user the default for Firefox is 0.92.
You can specify the JPEG quality as the second parameter to the toDataURL function. The default quality in Firefox is 0.92 (92%).
And according to this WebKit bug report, Chrome uses the same:
...Adds a libjpeg-based image encoder for Skia bitmaps. Default encoding quality
is 92 to match Mozilla...

I tested saving canvas content as .webp in Chrome and Edge with the same result, which suggests the default value of the quality parameter for WebP is 0.8.
Here are results with quality on the left:
default -> 313.65 kB
1 -> 8.29 MB
0.9 -> 0.98 MB
0.8 -> 313.65 kB
0.7 -> 200.63 kB
0.6 -> 160.19 kB
0.5 -> 130.57 kB
0.4 -> 109.31 kB
0.3 -> 91.17 kB
0.2 -> 75.09 kB
0.1 -> 67.78 kB
0 -> 48.71 kB
This makes me think the default quality is chosen as a trade-off between file size reduction and image quality, and that it may differ for other image types.
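For reference, a minimal sketch (TypeScript, run in a browser) of how such a size comparison could be reproduced; the canvas size and the random drawing are arbitrary test content, and the data-URL length is only an approximation of the encoded byte size:

// Rough sketch: compare data-URL sizes for different WebP quality values.
const canvas = document.createElement("canvas");
canvas.width = 1920;
canvas.height = 1080;
const ctx = canvas.getContext("2d")!;

// Fill the canvas with non-trivial content so the encoder has work to do.
for (let i = 0; i < 1000; i++) {
  ctx.fillStyle = `hsl(${Math.random() * 360}, 80%, 60%)`;
  ctx.fillRect(Math.random() * 1920, Math.random() * 1080, 50, 50);
}

// A data URL is base64, so its length is roughly 4/3 of the encoded byte size.
const sizeKB = (dataUrl: string) => (dataUrl.length * 3) / 4 / 1024;

console.log("default ->", sizeKB(canvas.toDataURL("image/webp")).toFixed(2), "kB");
for (const q of [1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0]) {
  console.log(q, "->", sizeKB(canvas.toDataURL("image/webp", q)).toFixed(2), "kB");
}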

Tiff versus BigTiff

Please let me know if there is another Stack Exchange community this question would be better suited for.
I am trying to understand the basic differences between Tiff and BigTiff. I have looked on various sites and the only difference that is mentioned is that BigTiff uses 64-bit offsets while Tiff uses 32-bit offsets. That being said, you would need to know which of the two types you are reading. How is this done? According to https://www.leadtools.com/help/leadtools/v19/main/api/tifffmt.html, this is done by reading a file flag. However, the flag they are referring to appears to be unique to their own reader as I cannot find a corresponding data field in the specifications as shown by http://www.fileformat.info/format/tiff/egff.htm. What am I missing? Does BigTiff use a different file header than Tiff?
Everything you need to know is described in the BigTIFF link posted by @cgohlke. This is just to provide an answer to your question:
Yes, it uses a different file header.
Normal TIFF uses the following header:
A 2-byte byte order mark: "II" for "Intel"/little endian, or "MM" for "Motorola"/big endian.
The (version) number 42* as a 16-bit value, in the endianness given.
An unsigned 32-bit offset to IFD0.
BigTIFF uses a slightly different header:
A 2-byte byte order mark, as above.
The (version) number 43 as a 16-bit value, in the endianness given.
The byte size of offsets as a 16-bit value, always 8 for BigTIFF.
2 bytes of padding, always 0 for BigTIFF.
An unsigned 64-bit offset to IFD0.
*) The value 42 was chosen for its "deep philosophical significance". Or according to the official specification, "[a]n arbitrary but carefully chosen number"...
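A minimal sketch (TypeScript) of how a reader could tell the two formats apart, following the header layouts listed above; field positions are as described in the BigTIFF proposal, and error handling is kept to a minimum:

// Sketch: inspect a TIFF/BigTIFF header from a raw byte buffer.
function readTiffHeader(buf: ArrayBuffer) {
  const view = new DataView(buf);

  // Bytes 0-1: byte order mark, "II" (0x4949) = little endian, "MM" (0x4D4D) = big endian.
  const bom = view.getUint16(0, false);
  const littleEndian = bom === 0x4949;
  if (!littleEndian && bom !== 0x4d4d) throw new Error("Not a TIFF file");

  // Bytes 2-3: version, 42 = classic TIFF, 43 = BigTIFF.
  const version = view.getUint16(2, littleEndian);

  if (version === 42) {
    // Classic TIFF: bytes 4-7 hold a 32-bit offset to IFD0.
    return { bigTiff: false, ifd0Offset: BigInt(view.getUint32(4, littleEndian)) };
  }
  if (version === 43) {
    // BigTIFF: 16-bit offset size (always 8), 16 bits of padding (always 0),
    // then a 64-bit offset to IFD0 at bytes 8-15.
    const offsetSize = view.getUint16(4, littleEndian);
    const padding = view.getUint16(6, littleEndian);
    if (offsetSize !== 8 || padding !== 0) throw new Error("Malformed BigTIFF header");
    return { bigTiff: true, ifd0Offset: view.getBigUint64(8, littleEndian) };
  }
  throw new Error(`Unknown TIFF version ${version}`);
}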

Why doesn't Google calculate data quantity conversions using binary, instead of just moving the decimal point left/right?

I already understand the fundamentals behind why the two calculations are different. I just want to know how can I get Google to give me the same binary conversion result that Bing does, because I don't feel like using Bing just to convert data quantities.
Use MiB and KiB when you want the 1024-based version. From the Kilobyte Wikipedia entry: "In the International System of Quantities, the kilobyte (symbol kB) is 1000 bytes, while the kibibyte (symbol KiB) is 1024 bytes."
For example, search for: 1 MiB to KiB

Does 1 KB (kilobyte) really equal 1024 bytes?

Until now I believed that 1024 bytes equals 1 KB (kilobyte), but then I read on the internet about the decimal and binary systems.
So is 1024 bytes = 1 KB actually the correct definition, or is there simply general confusion?
What you are seeing is a marketing stunt.
Since non-technical people don't know the difference between the metric meg, gig, etc. and the binary meg, gig, etc., storage marketers use the metric calculation, thus 1000 bytes == 1 kilobyte.
This can cause issues for developers and other highly technical people, so you get the idea of a binary meg, gig, etc., which is designated with "bi" instead of the standard prefix (e.g. mebibyte vs. megabyte, or gibibyte vs. gigabyte).
There are two ways to represent big numbers: You could either display them in multiples of 1000 (base 10) or 1024 (base 2). If you divide by 1000, you probably use the SI prefix names, if you divide by 1024, you probably use the IEC prefix names. The problem starts with dividing by 1024. Many applications use the SI prefix names for it and some use the IEC prefix names. But it is important how it is written:
Using IEC standard:
1 KiB = 1,024 bytes (Note: big K)
1 MiB = 1,024 KiB = 1,048,576 bytes
Using SI standard:
1 kB = 1,000 bytes (Note: small k)
1 MB = 1,000 kB = 1,000,000 bytes
Source: Ubuntu units policy: https://wiki.ubuntu.com/UnitsPolicy
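A small sketch (TypeScript) contrasting the two conventions; the prefix names and bases follow the lists above, and the example numbers are arbitrary:

// Format a byte count using either SI (1000-based) or IEC (1024-based) prefixes.
function formatBytes(bytes: number, iec: boolean): string {
  const base = iec ? 1024 : 1000;
  const units = iec
    ? ["B", "KiB", "MiB", "GiB", "TiB"]
    : ["B", "kB", "MB", "GB", "TB"];
  let value = bytes;
  let i = 0;
  while (value >= base && i < units.length - 1) {
    value /= base;
    i++;
  }
  return `${value.toFixed(2)} ${units[i]}`;
}

console.log(formatBytes(500_000_000_000, false)); // "500.00 GB"  (how a drive is sold)
console.log(formatBytes(500_000_000_000, true));  // "465.66 GiB" (how an OS reports it)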
In the normal world, most things go by the power of 10. This would include electricity, for example.
But the computer world is about half and half. For example, when they sell a hard drive, they sell it by powers of 10, so if it is a 1 KB drive, then it is 1000 B. But when the computer reads it, the OS usually reads it by powers of 1024. This is why, when you read the amount of space available on a drive, it reads much less than what was advertised. A 500 GB drive will read only about 466 GB, because the computer is reading the drive by the binary 1024 convention, not the power of 10 it was sold and advertised by. The same goes for flash drives. RAM, however, is both sold and read by the computer using the binary 1024 convention.
One thing to note: it is "B", not "b". There are 8 bits "b" in a byte "B". The reason I bring this up is that when you get internet service, they usually advertise the speed in bits, not bytes, while the download box on the computer reads the speed in bytes. Say you have a 50 Mb internet connection; it is actually a 6.25 MB connection in the download speed box, because you have to divide the 50 by 8, since there are 8 bits in a byte. That is how the computer reads it. It's another marketing strategy too; after all, 50 Mb sounds much faster than 6.25 MB. Other than speeds through a network, most things are read in bytes "B". Some people do not realize that there is a difference between "B" and "b".
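A tiny sketch (TypeScript) of the bits-versus-bytes arithmetic described above:

// Convert an advertised speed in megabits per second to megabytes per second.
const megabitsToMegabytes = (mbit: number) => mbit / 8; // 8 bits per byte

console.log(megabitsToMegabytes(50)); // 6.25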
Quite simple...
The word 'Byte' is a computing term for which the letter 'B' is used as an abbreviation.
It must follow, then, that any reference to bytes, e.g. KB, MB, etc., must be based on the well-known and widely accepted 1024 base.
Therefore 1 KB must equal 1024 bytes, 1 MB must equal 1,048,576 bytes (1024 × 1024), etc.
Any non-computing reference to kilo/mega etc. is based on the decimal 1000 base, e.g. 1 kW (kilowatt), which is 1000 watts.

AS3 - Why does setting a DisplayObject's alpha to 0.7 actually result in the alpha being set to 0.69921875?

At one point in my code I set a Sprite's alpha to 0.7
square.alpha = 0.7;
Later in my code, I check for this alpha in a conditional statement.
if (square.alpha == 0.7) {//do stuff}
I was scratching my head why it wasn't working until I did a trace(square.alpha); and instead of 0.7 I got 0.69921875.
This number (0.69921875) was the same for each instance of the sprite that I set to have an alpha of 0.7.
I did a few tests and it looks like the only values of alpha that return exactly the same value as I set them are 0, 0.5, and 1. Anything else seems to always return a number that is very very close to what I set it to, but not quite. For example 0.2 will give me 0.19921875
Why does this happen?
I use Flex to compile the code; not sure if that has any effect on this.
Alpha is stored as an 8 bit channel under the hood. The number is due to the float -> 8-bit int -> float conversion.
Here's the math:
256 * 0.7 = 179.2, which rounds to 179 // converting from float to 8-bit int
179 / 256 = 0.69921875 // converting from int back to float
It's not due to the limitation of floating point numbers as the others have suggested.
The reason that 0, 0.5, and 1 work correctly is that these are fractions that don't undergo any rounding when converted to an 8 bit int.
for example:
256 * 0.5 = 128 (no rounding necessary)
128 / 256 = 0.5
If you want a work around, you could set your alpha to a fraction of 256, and check it against the same fraction:
square.alpha = 179 / 256;
if (square.alpha == 179 / 256) {/*do stuff*/}
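A quick sketch (TypeScript) of the round trip described in this answer; the divisor of 256 mirrors the math above (the exact rounding Flash uses internally is an assumption):

// Simulate storing an alpha value in an 8-bit channel and reading it back.
function roundTripAlpha(alpha: number): number {
  const stored = Math.round(alpha * 256); // float -> 8-bit int
  return stored / 256;                    // 8-bit int -> float
}

console.log(roundTripAlpha(0.7)); // 0.69921875
console.log(roundTripAlpha(0.5)); // 0.5 (no rounding needed)
console.log(roundTripAlpha(0.2)); // 0.19921875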
That's a general limitation of floating point numbers. Just as you can't express 1/3 in the decimal system exactly (0.33333333.... etc.), you can't express (decimal) 0.1 as a binary floating point number exactly (0.00011001100110011001100110011... etc.).
You can express decimal 0.5 as a binary float exactly (0.1), as well as 0.25 (0.01) and other fractions that have a power of 2 in the denominator. That's why you saw a correct result for 0.5, but not for the others.
This part of the Python documentation explains it pretty well.
It's not about alpha... it's about floating point arithmetic. Because of the way computers store floating point numbers, some numbers just end up close to, but not exactly, what you want to store. You can find more details in this question: Why am I seeing inexact floating-point results in ECMAScript / ActionScript 3?

Output dimensions mismatch in Caffe SSD

While calculating the dimensions of the SSD Object Detection pipeline, we found that for the layer named "pool3", with parameters:
pooling_param {
  pool: MAX
  kernel_size: 2
  stride: 2
}
the input dimensions are 75x75x256 (WxHxC)
and according to the formula Wout = (Win − kernel + 2*padding)/stride + 1, the output dimension for width comes out to (75 − 2)/2 + 1 = 37.5.
However, the paper shows the output size at this point as 38, and the same is reported by the following code for this network:
net.blobs['pool3'].shape
The answer seems simple: the Caffe framework 'ceils' it. But according to this post, and this one as well, it should be 'flooring', and the answer should be 37.
So can anyone explain how Caffe treats these non-integral output sizes?
There's something called padding. When the output feature map size would not be a whole number, the input feature map is padded with 0s. That's a standard procedure, though it may not be explicitly mentioned.
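A small sketch (TypeScript) of the floor-versus-ceil question for the asker's numbers; which rule Caffe applies here is exactly what is being asked, and the ceil result is simply what matches the reported pool3 shape:

// Output size of a pooling/convolution layer along one spatial dimension.
function outputSize(
  win: number, kernel: number, stride: number, padding: number, ceil: boolean
): number {
  const raw = (win - kernel + 2 * padding) / stride + 1;
  return ceil ? Math.ceil(raw) : Math.floor(raw);
}

console.log(outputSize(75, 2, 2, 0, false)); // 37 (floor)
console.log(outputSize(75, 2, 2, 0, true));  // 38 (ceil, matching the paper and pool3's reported shape)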