Why does Google Chrome change background opacity? - html

I use the following CSS rule to set background color of div:
div {
background-color: rgba(96, 96, 96, .1);
}
In Google Chrome v42, the 'Computed' tab of Developer Tools shows rgba(96, 96, 96, 0.0980392). It looks like some kind of WebKit optimization...
In Firefox v36 the computed background color equals rgba(96, 96, 96, 0.1).
I've made a simple fiddle (http://jsfiddle.net/sergfry/c7Lzf5v2/) that shows it in action.
So, can I prevent opacity changing in Google Chrome?
Thanks!

As stated by Quentin, this is an IEEE floating point issue.
0.1 cannot be represented exactly in binary floating point, simply because of the way binary works.
0.1 is one tenth, or 1/10. To show it in binary, divide binary 1 by binary 1010 using binary long division: the result is 0.000110011001100..., with the group 0011 repeating forever.
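You can check this from a browser console: JavaScript's Number#toString(2) prints the binary expansion of the double that is actually stored (a quick illustration, nothing CSS-specific):
// The repeating 0011 pattern is cut off once the 53 significant bits
// of a double are used up (the last bit is rounded).
console.log((0.1).toString(2));
// -> 0.0001100110011001100110011001100110011001100110011001101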
Browsers will pick and choose the closest available point to 0.1 and use that as the opacity instead. Some will go over and some will go under.
Firefox, I would guess, is just showing the human-readable version, but in reality it is using the same kind of binary floating-point value internally.
As an example:
body {
    color: rgba(0,0,0,0.1); /* computed as 0.0980392 */
    opacity: 0.1;           /* computed as 0.100000001490116 */
}
Two completely different computed values for exactly the same written value of 0.1.
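You can also read the computed values back from script; exactly how many digits you get depends on the browser and version (a quick check, assuming the rule above is applied to body):
// Read the computed style back; Chrome 42 reported the rgba() alpha
// with extra digits (e.g. 0.0980392) while opacity stayed at 0.1.
var computed = window.getComputedStyle(document.body);
console.log(computed.color);   // e.g. "rgba(0, 0, 0, 0.0980392)" (browser-dependent)
console.log(computed.opacity); // e.g. "0.1"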
This floating-point issue can be reproduced elsewhere in browsers using other languages such as JavaScript. JavaScript numbers are always 64-bit floating point (which I believe CSS values are as well). This is more commonly known as double-precision floating point. PHP also uses double-precision floating points.
64-bit floating point numbers are, as you might guess, stored in 64 bits: the fraction (mantissa) in bits 0 to 51, the exponent in bits 52 to 62, and the sign in bit 63.
This causes problems down the line, as it means integers are only guaranteed to be exact up to about 15 significant decimal digits, and at most 17 significant digits can be represented.
This means that numbers can round up very easily or may just not be stored correctly.
var x = 999999999999999; // x = 999999999999999
var y = 9999999999999999; // y = 10000000000000000
Floating-point arithmetic can also be off by a surprising amount in places. As I've shown above, 0.1 in decimal isn't actually 0.1 in binary but 0.000110011... and so on. This means some basic maths can come out visibly wrong.
var x = 0.2 + 0.1; // x = 0.30000000000000004
You end up having to work around the representation to get the number you actually want. This can be done by multiplying the numbers by 10 and then dividing the result by 10.
var x = (0.2 * 10 + 0.1 * 10) / 10; // x = 0.3
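The same trick can be wrapped in a small helper that scales by a power of ten, rounds to an integer and scales back (a sketch; roundTo is just an illustrative name):
// Round a float to a fixed number of decimals by scaling up,
// rounding to an integer, and scaling back down.
function roundTo(value, decimals) {
  var factor = Math.pow(10, decimals);
  return Math.round(value * factor) / factor;
}

console.log(0.2 + 0.1);             // 0.30000000000000004
console.log(roundTo(0.2 + 0.1, 1)); // 0.3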
Precision in computer floating point is a hard problem, and it gets harder still when multiple implementations (or browsers) each do their best to balance speed against displaying the information they're given correctly.
There are quite a few useful references on floating point and on what the CSS processor (or JS, as I expect many calculations are the same) may be trying to achieve:
Exploring Binary - Why 0.1 does not exist
Javascript Numbers
Wikipedia - IEEE floating point
Wikipedia - Double-precision floating point

Related

Preferred value to encode 96 DPI within PNG

PNG files may contain optional chunks of information. One of these optional blocks is the physical resolution of the image (chunk signature pHYs) [1][2]. It contains separate values for horizontal and vertical resolution as pixels per unit, and a unit specifier that can be 0 for 'unit unspecified' or 1 for metre, which is quite confusing, because resolutions are traditionally expressed in DPI.
The inch is defined as exactly 25.4 mm in the metric system.
So, if I calculate this correctly, 96 DPI means 3779.527559... dots per metre. For the pHYs chunk this has to be rounded. I'd say 3780 is the right value, but I have also found 3779 suggested on the web. Images of both kinds coexist on my machine.
The difference may not be important in most cases,
3779 * 0.0254 = 95.9866
3780 * 0.0254 = 96.012
but I'm trying to avoid tricky layout problems when mixing images of both kinds in DPI-aware processes like creating PDF files with LaTeX.
[1] Portable Network Graphics (PNG) Specification (Second Edition), section 11.3.5.3 pHYs Physical pixel dimensions
[2] PNG Specification: Chunk Specifications, section 4.2.4.2. pHYs Physical pixel dimensions
The relative difference is less than 0.03% (2.65/10000); it's hardly relevant.
Anyway, I'd go with 3780. Not only is it the nearest value, but it would also give the correct result if some (sloppy) converter rounds the value down (instead of to the nearest).
Also, if you google "72.009 DPI PNG" you'll see a similar (non-)issue with 72 DPI (example), and it seems most people rounded the value up (which is also the nearest): 2834.645 -> 2835.
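For completeness, the value to store in the pHYs chunk is just the DPI divided by 0.0254 and rounded to the nearest integer (a small sketch; the function name is mine):
// Convert a resolution in DPI to the pixels-per-metre value stored
// in the PNG pHYs chunk (1 inch = 0.0254 m exactly).
function dpiToPixelsPerMetre(dpi) {
  return Math.round(dpi / 0.0254);
}

console.log(dpiToPixelsPerMetre(96)); // 3780 (from 3779.527...)
console.log(dpiToPixelsPerMetre(72)); // 2835 (from 2834.645...)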

Html rgba color opacity?

When we use RGBA in HTML we use something like this:
<div style="background: rgba(x, x, x, 0.dd)">Some Content</div>
How many decimals can you use in the dd (the opacity)? Is it browser dependent, or are its limits specified in the standards?
The specification says it is a <number> which is defined as:
zero or more digits followed by a dot (.) followed by one or more digits
So there is no limit specified in the CSS spec.
I'd be surprised if any human eye could distinguish beyond two decimal places though.
The value can be any number between 0.0 and 1.0.
The resolution depends on the colour-channel resolution, which is typically 8-bit (the future may offer higher resolutions such as 10- and 12-bit, although I doubt that will happen anytime soon; this is also why a fraction is used instead of a byte value).
The alpha value is multiplied by 255 and the result is rounded to the closest integer, so it is limited which numbers are actually useful:
Internal byte value = round(alpha * 255);
In other words, it takes an increment of roughly 1 / 255 ≈ 0.0039 in the alpha value to give you an actual change in the final byte value and in the visual appearance (assuming a solid background).
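A quick way to see this quantization is to map a few nearby alpha values to the byte they produce; values that land on the same byte are rendered identically (a rough check in JavaScript, assuming the round(alpha * 255) model above):
// Nearby alpha values that round to the same 8-bit value are
// indistinguishable once the colour is composited.
[0.098, 0.1, 0.101, 0.102].forEach(function (alpha) {
  var byteValue = Math.round(alpha * 255);
  console.log(alpha + ' -> byte ' + byteValue + ' (effective ' + (byteValue / 255).toFixed(7) + ')');
});
// 0.1, 0.101 and 0.102 all map to byte 26 (effective 0.1019608),
// while 0.098 maps to byte 25 (effective 0.0980392).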
I made a small script here which gives you the result of using various numbers of decimals in the fraction value. As you can see, at 3 decimals the values start to coincide and are therefore not so useful.
ONLINE GENERATED TABLES HERE
The loop to generate the table looks like this in general:
for (var i = 0; i < 1; i += 0.1) {
    var num = Math.round(i * 255); // Math.round takes a single argument
...
}
The opacity property takes a value that is commonly written with two decimals.
All current browsers recognize this; the situation is a little different for IE8 and below.

Text size in standard printable points

How can I set the text size (inside a TextField) in standard CSS/printable points? According to the manual:
fontSize - Only the numeric part of the value is used. Units (px, pt)
are not parsed; pixels and points are equivalent.
As far as I understand, 1 pixel may equal 1 point only in the 72 PPI case. So ActionScript just operates on pixels (not real points). My problem is getting the actual text size that I can print. Any advice or solutions are welcome.
SWF is measured in pixels and, moreover, is scalable, so 1 pixel can be 1 point now, 2 points a bit later (scaleY = scaleX = 2), and an undefined number later still (removed from stage without dereferencing). In short, for AS there are NO "real points", since it does not know a thing about printers, while it does know about displays.
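If you still need a nominal pixel-to-point conversion for printing, the usual relationship is 72 points per inch, so you have to pick an assumed screen density yourself (a sketch only; the 96 DPI below is an assumption, not something the SWF knows):
// Nominal conversion: 1 pt = 1/72 inch, so pt = px * 72 / assumedDpi.
// At an assumed 72 DPI, pixels and points coincide, as the manual says.
function pixelsToPoints(px, assumedDpi) {
  return px * 72 / assumedDpi;
}

console.log(pixelsToPoints(16, 96)); // 12 pt on an assumed 96-DPI display
console.log(pixelsToPoints(16, 72)); // 16 pt, i.e. px == pt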

Converting between RGB and HSL/HSV: What to do with overflows?

I've implemented some functions according to the HSL->RGB and HSV->RGB algorithms.
They mostly work fine, but I'm not sure what the right thing to do is when a color component overflows as a result of the conversion.
E.g., the red component ends up being 1.2 whereas the allowed range is [0..1]. If I multiply that by 255 I will obviously get a value that is invalid in the RGB world.
What is the correct way of handling this -- truncating (if > 1 then set to 1) or wrapping around (if > 1 then subtract 1)?
It is not possible for the values R, G and B to come out of their range if you have properly implemented the standard algorithms and the inputs are in their ranges.
Which algorithms have you implemented?
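For reference, one standard HSL-to-RGB formulation looks like the sketch below; when the inputs are in range, every output component stays within [0, 1] by construction, so no clamping or wrapping should ever be needed (this is just one common variant, not necessarily the one you implemented):
// HSL -> RGB, h in [0, 360), s and l in [0, 1]; returns components in [0, 1].
function hslToRgb(h, s, l) {
  var c = (1 - Math.abs(2 * l - 1)) * s;    // chroma, always <= 1
  var hp = (h % 360) / 60;                  // which 60-degree sector we are in
  var x = c * (1 - Math.abs((hp % 2) - 1)); // second-largest component
  var r = 0, g = 0, b = 0;
  if (hp < 1)      { r = c; g = x; }
  else if (hp < 2) { r = x; g = c; }
  else if (hp < 3) { g = c; b = x; }
  else if (hp < 4) { g = x; b = c; }
  else if (hp < 5) { r = x; b = c; }
  else             { r = c; b = x; }
  var m = l - c / 2;                        // lightness offset
  return [r + m, g + m, b + m];             // each value ends up in [0, 1]
}

console.log(hslToRgb(0, 1, 0.5));   // [1, 0, 0] -> pure red
console.log(hslToRgb(120, 1, 0.5)); // [0, 1, 0] -> pure green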

How to divide tiny double precision numbers correctly without precision errors?

I'm trying to diagnose and fix a bug which boils down to X/Y yielding an unstable result when X and Y are small.
In this case, both cx and patharea increase smoothly. Their ratio is a smooth asymptote at high numbers, but erratic for "small" numbers. The obvious first thought is that we're reaching the limit of floating-point accuracy, but the actual numbers themselves are nowhere near it. ActionScript "Number" types are IEEE 754 double-precision floats, so they should have about 15 decimal digits of precision (if I read it right).
Some typical values of the denominator (patharea):
0.0000000002119123
0.0000000002137313
0.0000000002137313
0.0000000002155502
0.0000000002182787
0.0000000002200977
0.0000000002210072
And the numerator (cx):
0.0000000922932995
0.0000000930474444
0.0000000930582124
0.0000000938123574
0.0000000950458711
0.0000000958000159
0.0000000962901528
0.0000000970442977
0.0000000977984426
Each of these increases monotonically, but the ratio is chaotic as seen above.
At larger numbers it settles down to a smooth hyperbola.
So, my question: what's the correct way to deal with very small numbers when you need to divide one by another?
I thought of multiplying numerator and/or denominator by 1000 in advance, but couldn't quite work it out.
The actual code in question is the recalculate() function here. It computes the centroid of a polygon, but when the polygon is tiny, the centroid jumps erratically around the place, and can end up a long distance from the polygon. The data series above are the result of moving one node of the polygon in a consistent direction (by hand, which is why it's not perfectly smooth).
This is Adobe Flex 4.5.
I believe the problem most likely is caused by the following line in your code:
sc = (lx*latp-lon*ly)*paint.map.scalefactor;
If your polygon is very small, then lx and lon are almost the same, as are ly and latp. The two products are both very large compared to the result, so you are subtracting two numbers that are almost equal.
To get around this, we can make use of the fact that:
x1*y2 - x2*y1 = (x2+(x1-x2))*y2 - x2*(y2+(y1-y2))
             = x2*y2 + (x1-x2)*y2 - x2*y2 - x2*(y1-y2)
             = (x1-x2)*y2 - x2*(y1-y2)
So, try this:
dlon = lx - lon
dlat = ly - latp
sc = (dlon*latp-lon*dlat)*paint.map.scalefactor;
The value is mathematically the same, but the terms are an order of magnitude smaller, so the error should be an order of magnitude smaller as well.
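A quick way to convince yourself is to compare both forms on large, nearly equal coordinates (the numbers below are made up and exaggerated; real lon/lat values won't be this extreme, but the effect is the same in miniature):
// Toy example: the exact answer is (1e8+1)*(1e8-1) - 1e8*1e8 = -1.
var lx = 100000001, latp = 99999999;
var lon = 100000000, ly = 100000000;

var naive = lx * latp - lon * ly;         // 0 -- the -1 is swallowed by the huge products

var dlon = lx - lon;                      // 1, computed exactly
var dlat = ly - latp;                     // 1, computed exactly
var rewritten = dlon * latp - lon * dlat; // -1, the exact answer

console.log(naive, rewritten);            // 0 -1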
Jeffrey Sax has correctly identified the basic issue - loss of precision from combining terms that are (much) larger than the final result.
The suggested rewriting eliminates part of the problem - apparently sufficient for the actual case, given the happy response.
You may find, however, that if the polygon becomes (much) smaller again and/or moves farther from the origin, the inaccuracy will show up again. In the rewritten formula the terms are still quite a bit larger than their difference.
Furthermore, there is another 'combining large, comparable numbers with different signs' issue in the algorithm: the various 'sc' values in subsequent cycles of the iteration over the polygon's edges effectively combine into a final number that is (much) smaller than the individual sc(i) are. (In a convex polygon you will find one contiguous sequence of positive values and one contiguous sequence of negative values; in non-convex polygons the negatives and positives may be intertwined.)
What the algorithm is doing, effectively, is computing the area of the polygon by adding areas of triangles spanned by the edges and the origin, where some of the terms are negative (whenever an edge is traversed clockwise, viewing it from the origin) and some positive (anti-clockwise walk over the edge).
You get rid of ALL the loss-of-precision issues by putting the origin at one of the polygon's corners, say (lx,ly), and then adding the triangle surfaces spanned by the edges and that corner (so: transforming lon to (lon-lx) and latp to (latp-ly)), with the additional bonus that you need to process two fewer triangles, because the edges that meet the chosen origin corner obviously yield zero surface.
For the area part that's all. For the centroid part, you will of course have to "transform back" the result to the original frame, i.e. add (lx,ly) at the end.
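A minimal sketch of that idea (the names are mine, and it uses plain x/y pairs rather than the lon/lat variables from the real code): shift every vertex so the first corner becomes the origin, accumulate the usual shoelace terms on the small shifted values, then shift the centroid back at the end.
// Polygon centroid with coordinates taken relative to the first vertex,
// so the shoelace terms stay small and comparable in magnitude.
function polygonCentroid(points) {       // points: [{x, y}, ...], closed implicitly
  var x0 = points[0].x, y0 = points[0].y;
  var area2 = 0, cx = 0, cy = 0;         // twice the signed area and weighted sums
  for (var i = 0; i < points.length; i++) {
    var j = (i + 1) % points.length;
    var xi = points[i].x - x0, yi = points[i].y - y0; // shifted coordinates
    var xj = points[j].x - x0, yj = points[j].y - y0;
    var cross = xi * yj - xj * yi;       // twice the signed triangle area
    area2 += cross;
    cx += (xi + xj) * cross;
    cy += (yi + yj) * cross;
  }
  return {                               // transform back to the original frame
    x: x0 + cx / (3 * area2),
    y: y0 + cy / (3 * area2)
  };
}

// Unit square far from the origin: centroid should be (1000000.5, 2000000.5).
console.log(polygonCentroid([
  {x: 1000000, y: 2000000}, {x: 1000001, y: 2000000},
  {x: 1000001, y: 2000001}, {x: 1000000, y: 2000001}
]));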