When we use RGBA in HTML we write something like this:
<div style="background: rgba(x, x, x, 0.dd)">Some Content</div>
How many decimal places can you go to in the dd (the alpha value)? Is it browser dependent, or are its limits specified in the standards?
The specification says it is a <number> which is defined as:
zero or more digits followed by a dot (.) followed by one or more digits
So there is no limit specified in the CSS spec.
I'd be surprised if any human eye could distinguish beyond two decimal places though.
The value can be any number between 0.0 and 1.0.
The effective resolution depends on the resolution of the color space, which is typically 8-bit (the future may offer higher resolutions such as 10- and 12-bit, although I doubt that will happen anytime soon, but that is why a fraction is used instead of a byte value).
The alpha value is multiplied by 255 and the result is rounded to the nearest integer, so only a limited set of fraction values produce distinct results:
Internal byte value = round(alpha * 255);
In other words, you need an increment of roughly 1 / 255 ≈ 0.0039 to get an actual change of the final byte value and of the visual appearance (assuming a solid background).
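For example, two alpha values that differ only in the third decimal can collapse to the same byte (a quick JavaScript check; the specific values are just for illustration):
Math.round(0.125 * 255); // 32
Math.round(0.127 * 255); // 32 - same byte, so no visible difference
Math.round(0.13  * 255); // 33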
I made a small script which gives you the result of using various numbers of decimals in the fraction value - as you can see, at 3 decimals the values start to collide and are therefore not so useful.
The loop to generate the table looks roughly like this:
for (var i = 0; i < 1; i += 0.1) {
    var num = Math.round(i * 255); // Math.round takes a single argument
    // ... build the table row from i and num here
}
The opacity property is typically specified with two decimal places.
All current browsers recognize this; the situation is a little different for IE8 and below, which use the proprietary filter syntax instead.
I am trying to include a simple density calculation in Access 2016, but the form returns a value of 0 if the input dimensions (mass or sphere diameter) are < 0.5. The field works fine for larger dimensions, so I assume that the smaller values are getting rounded to 0 somewhere along the way, but I can't figure out where.
For the inputs in my table, I have the Field Names "green mass", "green pole", and "green equator", where the data type for each is set to "Number", the Field Size is set to "Single" (vs. Double or Decimal), and Decimal Places is set to 4 digits.
The resulting density is displayed in the Field "apparent green density", where the data type is set to "Calculated", the Result Type is set to "Single", and Decimal Places is set to 4 digits.
After looking at various Access forums and websites, I'm pretty sure I want to use Single or Double as my field size, but I've also tried Decimal, Byte, and Integer, and I keep getting 0.
Can anyone explain why this isn't working?
The equation is below. It's a bit complicated because it's a 3-part IIf statement (if dimensions for a sphere are given, calculate the density of a sphere; if dimensions of a disc are given, calculate the density of a disc; if dimensions of a cube are given, and so on). All three cases work for large dimensions (> 0.5), but all 3 result in 0 for dimensions < 0.5.
IIf([GreenPole],[GreenMass]/(3.14159265359/6*2.54^3*(([GreenPole]+[GreenEquator])/2)^3),IIf([GreenDia],([GreenMass]/(3.14159265359*([GreenDia]/2)^2*[GreenHeight]*2.54^3)),IIf([GreenLength],[GreenMass]/([GreenLength]*[GreenWidth]*[GreenThickness]*2.54^3),0)))
The first part of the equation, for the density of a sphere, is:
IIf([GreenPole],[GreenMass]/(3.14159265359/6*2.54^3*(([GreenPole]+[GreenEquator])/2)^3),0)
Oliver Jacot-Descombes got me started in the right direction. I don't have much experience at all with coding, but I think what happened is that the field named in my IIf condition is somehow converted to a Boolean (Yes/No) value, and anything less than 0.5 is rounded to No, so the false part (0) is returned instead of the true part.
I modified the code to:
IIf([GreenPole]>0,[GreenMass]/(3.14159265359/6*2.54^3*(([GreenPole]+[GreenEquator])/2)^3),0)
And everything works now. (I also modified the second and third IIf statements to IIf([GreenLength]>0 ... and IIf([GreenDia]>0 ...).)
I use the following CSS rule to set background color of div:
div {
    background-color: rgba(96, 96, 96, .1);
}
In Google Chrome v42, in the 'Computed' tab of Developer Tools, I see this result: rgba(96, 96, 96, 0.0980392). It looks like some WebKit optimization...
In Firefox v36 the computed background color equals rgba(96, 96, 96, 0.1).
I've made a simple fiddle, http://jsfiddle.net/sergfry/c7Lzf5v2/, that shows it in action.
So, can I prevent opacity changing in Google Chrome?
Thanks!
As stated by Quentin, this is an IEEE floating point issue.
0.1 can't actually be represented exactly in binary floating point, simply due to the way that binary works.
0.1 is one-tenth, or 1/10. To show it in binary, divide binary 1 by binary 1010 using binary long division.
As you can see, 0.1 in binary is 0.000110011001100...0011, and it will keep repeating 0011 at the end to infinity.
Browsers will pick and choose the closest available point to 0.1 and use that as the opacity instead. Some will go over and some will go under.
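For what it's worth, Chrome's 0.0980392 is exactly what you would get if the alpha ends up snapped to an 8-bit channel value (my own quick check, not something stated in the question or answer):
25 / 255;                    // 0.09803921568627451 - matches Chrome's computed alpha
Math.round(0.0980392 * 255); // 25 - the 8-bit channel step just below 0.1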
Firefox, I would guess, is just showing the human-readable version, but in reality it is using a machine floating-point value as well.
As an example:
body {
    color: rgba(0,0,0,0.1); /* actually 0.0980392 */
    opacity: 0.1; /* actually 0.100000001490116 */
}
Two completely different internal values for exactly the same written value of 0.1.
This floating-point issue can be replicated elsewhere in browsers using other languages such as JavaScript. JavaScript numbers are always 64-bit floating point (and I believe CSS values are as well). This is more commonly known as double-precision floating point. PHP also uses double-precision floating point.
64-bit floating point numbers are, as you could guess, stored in 64 bits: the fraction (mantissa) in bits 0 to 51, the exponent in bits 52 to 62, and the sign in bit 63.
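If you want to see that layout for yourself, you can dump the raw bytes of 0.1 from JavaScript (my own illustration, not part of the original answer):
// Dump the IEEE 754 double bit pattern of 0.1 (sign, 11-bit exponent, 52-bit fraction)
var buf = new ArrayBuffer(8);
new DataView(buf).setFloat64(0, 0.1); // big-endian by default
var hex = Array.prototype.map.call(new Uint8Array(buf), function (b) {
    return ('0' + b.toString(16)).slice(-2);
}).join('');
console.log(hex); // "3fb999999999999a" - the repeating binary 1001 shows up as repeating 9s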
This representation causes problems down the line, as it means integers are only exact up to about 15 significant decimal digits, and at most about 17 significant digits can be distinguished.
This means that numbers can get rounded very easily or may simply not be stored exactly.
var x = 999999999999999; // x = 999999999999999
var y = 9999999999999999; // y = 10000000000000000
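JavaScript actually exposes this limit as a constant (my addition, for reference):
var maxSafe = Math.pow(2, 53) - 1;       // 9007199254740991, a.k.a. Number.MAX_SAFE_INTEGER
9999999999999999 === 10000000000000000;  // true - the 16-digit literal cannot be stored exactly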
Floating-point arithmetic can also be off by small amounts in places. As I've shown above, 0.1 in decimal isn't actually 0.1 in binary but 0.000110011... and so on, which means some basic maths can come out wrong.
var x = 0.2 + 0.1; // x = 0.30000000000000004
You end up having to work around the system to get the number you actually want. This can be done by multiplying each number by 10 and then dividing the result by 10 to get your actual wanted result.
var x = (0.2 * 10 + 0.1 * 10) / 10; // x = 0.3
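Another common workaround (not from the original answer) is to round only when you display the value:
var y = (0.2 + 0.1).toFixed(1);             // "0.3" - note that toFixed returns a string
var z = parseFloat((0.2 + 0.1).toFixed(1)); // 0.3 as a number again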
Precision with floating point in computers is very difficult, and it is even more difficult when there are multiple different implementations (or browsers) each trying to do their best for speed while displaying the information they're given correctly.
There are quite a few different pieces of information regarding floating points and what the CSS processor (or JS, as I expect many of the calculations will be the same) may be trying to achieve.
Exploring Binary - Why 0.1 does not exist
Javascript Numbers
Wikipedia - IEEE floating point
Wikipedia - Double-precision floating point
I've implemented some functions according to the HSL->RGB and HSV->RGB algorithms.
They mostly work fine, but I'm not sure what is the right thing to do when a color component overflows as a result of the conversion.
E.g., the red component ends up being 1.2 whereas the allowed range is [0..1]. If I multiply that by 255 I will obviously get a value that is invalid in the RGB world.
What is the correct way of handling this -- truncating (if > 1 then set to 1) or wrapping around (if > 1 then subtract 1)?
It is not possible for the R, G and B values to come out of their range if you have properly implemented the standard algorithms and the inputs are in their ranges.
Which algorithms have you implemented?
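For reference, here is one standard HSV to RGB formulation (a sketch in JavaScript, not the poster's code); every channel is built from products and sums of values in [0, 1], so it cannot exceed 1 when the inputs are in range:
function hsvToRgb(h, s, v) {
    // h in [0, 360), s and v in [0, 1]
    var c = v * s;                                  // chroma, in [0, 1]
    var x = c * (1 - Math.abs(((h / 60) % 2) - 1)); // in [0, c]
    var m = v - c;
    var rgb;
    if (h < 60)       rgb = [c, x, 0];
    else if (h < 120) rgb = [x, c, 0];
    else if (h < 180) rgb = [0, c, x];
    else if (h < 240) rgb = [0, x, c];
    else if (h < 300) rgb = [x, 0, c];
    else              rgb = [c, 0, x];
    // adding m keeps every component between m and v, so within [0, 1]
    return [rgb[0] + m, rgb[1] + m, rgb[2] + m];
}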
I wish to create a 2-D plot with x-axis values: 0, 10^-2, 10^-1, 10^0, 10^1, 10^2.
I tried using semilogx, but that does not work because the 0 value gets dropped (understandably).
So instead I am using xticklabels
datalabels = {'0', '10^-2', '10^-1', '10^0', '10^1', '10^2'};
data = [1, 2, 3, 4, 5, 6];
plot(data);
set(gca(),"xticklabel", datalabels);
This is working fine, except for one small nit:
The x-axis labels get displayed differently, depending on whether the exponent is positive or negative. Positive exponents are displayed as superscripts. Negative exponents are not. For example, '10^-2' is displayed as '10-2', with '-2' sitting on the same baseline as '10'.
Anyone know how to enforce consistency, so all the exponents are displayed as superscripts?
UPDATE: I created a legend with a mixture of negative and positive exponents, and it looks really ugly. I now see that, in addition to inconsistently displaying the exponents as superscripts, Octave uses different font sizes depending on whether the exponent is negative or positive.
Have you tried '10^{-2}'?
From this reference:
Finally, superscripting and subscripting can be controlled with the
'^' and '_' characters. If the '^' or '_' is followed by a '{'
character, then all of the block surrounded by the { } pair is super-
or sub-scripted. Without the { } pair, only the character immediately
following the '^' or '_' is super- or sub-scripted.
I'm currently making a color picker (a pretty standard one, pretty much the same as Photoshop with fewer options at the moment: it's still at an early stage). Here's a picture of the actual thing: http://i.stack.imgur.com/oEvJW.jpg
The problem is: to retrieve the color of the pixel that is under the color selector (the small one; the other is the mouse), I have this line that I thought would do it:
_currentColor = Convert.hsbToHex(new HSB(0,
    ((_colorSelector.x + _colorSelector.width/2)*100)/_largeur,
    ((_colorSelector.y + _colorSelector.height/2)*100)/_hauteur
));
Just to clarify the code: I simply use the coordinates of the selector to create a new HSB color (saturation is represented on the X axis and brightness (value) on the Y axis of such a color picker). I then convert this HSB color to hexadecimal and assign it to a property. The hue is always set to 0 at the moment, but this is irrelevant as I only work with pure red for testing.
It partially does what I wanted, but the returned color values are inverted for most of the corners:
for (0,0) it's supposed to return 0xFFFFFF, but it returns 0x000000 instead
for (256, 0) it's supposed to return 0xFF0000, but it returns 0x000000 instead
for (0, 256) it's supposed to return 0x000000, but it returns 0xFFFFFF instead
for (256, 256) it's supposed to return 0x000000, but it returns 0xFF0000 instead
I tried many variations in my code, but I just can't seem to fix it properly. Any replies/suggestions are more than welcome!
I think the error (or one of them) is using values in the range 0..256, which seems to lead to overflows; try using 0..255 instead.
Just swap the X and Y axes and it's solved.
Assuming the registration point is centered, which seems to be the case since you're doing:
(_colorSelector.x + _colorSelector.width/2)
I think your formula should look something like this:
(_colorSelector.x + _colorSelector.width/2) / _colorSelector.width
If your registration point is at (0,0), it should be just:
(_colorSelector.x / _colorSelector.width);
The above should give you a number in the range 0...1
Also, you should invert this value for brightness (because a low y value represents a high brightness and a high y value a low brightness; brightness decreases along the y axis, while saturation increases along the x axis). So for your y axis you should do:
1 - ((_colorSelector.y + _colorSelector.height/2) / _colorSelector.height)
(Again, assuming the registration point is centered).
If your conversion function expects percentages, then you should multiply by 100
(_colorSelector.x + _colorSelector.width/2) / _colorSelector.width * 100
(1 - ((_colorSelector.y + _colorSelector.height/2) / _colorSelector.height)) * 100
Maybe I'm missing something, though. I'm not sure where _largeur and _hauteur come from, but it looks like these are width and height. I think you should use the _colorSelector height and width, but I could be wrong.
PS: I hope you get the idea, because I haven't compiled the above code and maybe I screwed up some parentheses or made some other dumb mistake.
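To put the whole mapping in one place, here is a rough JavaScript sketch of the idea (the picker dimensions and the function name are my own assumptions, not the poster's actual API):
// Hypothetical helper: map a centered selector position to HSB saturation/brightness (0-100).
// pickerWidth/pickerHeight are the dimensions of the gradient area (assumed names).
function selectorToSB(selector, pickerWidth, pickerHeight) {
    var cx = selector.x + selector.width / 2;        // center of the selector
    var cy = selector.y + selector.height / 2;
    var saturation = (cx / pickerWidth) * 100;       // grows left to right
    var brightness = (1 - cy / pickerHeight) * 100;  // inverted: top is bright, bottom is dark
    return { saturation: saturation, brightness: brightness };
}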