I'm trying to implement a robust/stable Laguerre's method. My code works for most polynomials, but for a few it "fails"; more precisely, I don't know how to properly handle "corner" cases.
The following code tries to find a single root (F64 = 64-bit float, C64 = 64-bit complex):
private static C64 GetSingle( C64 guess, int degree, C64 [] coeffs )
{
    var i = 0;
    var x = guess;
    var n = (F64) degree;
    while( true )
    {
        ++Iters;
        var v0 = PolyUtils.Evaluate( x, coeffs, degree );
        if( v0.Abs() <= ACCURACY )
            break;
        var v1 = PolyUtils.EvaluateDeriv1( x, coeffs, degree );
        var v2 = PolyUtils.EvaluateDeriv2( x, coeffs, degree );
        var g = v1 / v0;
        var gg = g * g;
        var h = gg - ( v2 / v0 );
        var f = C64.Sqrt(( n - 1.0 ) * ( n * h - gg ));
        var d0 = g - f;
        var d1 = g + f;
        var dx = d0.Abs() >= d1.Abs() ? ( n / d0 ) : ( n / d1 );
        x -= dx;
        // if( dx.Abs() <= ACCURACY )
        //     break;
        if( ++i == ITERS_PER_ROOT ) // even after trying all guesses we haven't converged to a root (within the given accuracy)
            break;
        if(( i & ( ITERS_PER_GUESS - 1 )) == 0 ) // didn't converge yet --> restart with a different guess
        {
            x = GUESSES[ i / ITERS_PER_GUESS ];
        }
    }
    return x;
}
If it doesn't find a root, it tries a different guess; the first guess (if not specified) is always zero.
For example, for 'x^4 + x^3 + x + 1' it finds the 1st root, '-1'.
It then deflates (divides) the original polynomial by 'x + 1', so the search for the 2nd root continues with the polynomial 'x^3 + 1'.
Again it starts with zero as the initial guess... but now both the 1st and 2nd derivatives are zero, which leads to zero in both 'd0' and 'd1'... ending in division by zero (and NaNs in the root).
Another such example is 'x^5 - 1': while searching for the 1st root we again end up with zero derivatives.
Can someone tell me how to handle such situations?
Should I just try another guess if the derivatives are zero? I saw many implementations on the net but none had such conditions, so I don't know if I'm missing something.
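The only workaround I've come up with so far is to perturb 'x' when both denominators vanish and retry the step. A minimal sketch of the idea, using System.Numerics.Complex instead of my C64 wrapper (the threshold and the nudge size are arbitrary):

using System;
using System.Numerics;

static class LaguerreStep
{
    const double Tiny = 1e-30; // arbitrary "numerically zero" threshold

    // One Laguerre update; v0, v1, v2 are p(x), p'(x), p''(x); n is the degree.
    public static Complex Next(Complex x, Complex v0, Complex v1, Complex v2, double n)
    {
        Complex g = v1 / v0;
        Complex h = g * g - v2 / v0;
        Complex f = Complex.Sqrt((n - 1.0) * (n * h - g * g));
        Complex d = (g - f).Magnitude >= (g + f).Magnitude ? g - f : g + f;
        if (d.Magnitude < Tiny)                 // e.g. x^3 + 1 at x = 0: p' = p'' = 0
            return x + new Complex(1e-4, 1e-4); // arbitrary small complex nudge, then retry
        return x - n / d;
    }
}

But I have no idea whether that is the standard way to handle it.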
Thank you
Here is a pic of what I basically want to achieve:
So as the title says, I want to merge long/lat points whose radii (of 25 km, for example) touch, inside a bounding box of long/lat points.
Here is my very simple DB structure:
+-------+-------------+------------+
| id | long | lat |
+-------+-------------+------------+
| 1 | -90.27137 | 50.00702 |
| 2 | -92.27137 | 52.00702 |
| 3 | -87.27137 | 48.00702 |
| 4 | -91.27137 | 51.00702 |
+-------+-------------+------------+
Here is my query so far:
set @bottom_lat = 40.00702;
set @bottom_lon = -100.27137;
set @top_lat = 60.00702;
set @top_lon = -80.27137;

SELECT AVG(latitude), AVG(longitude)
FROM destination
WHERE latitude > @bottom_lat AND longitude > @bottom_lon
  AND latitude < @top_lat AND longitude < @top_lon
So my query just merges all the points inside an imaginary bounding box, without considering radius.
I know that I would probably have to work with the Haversine formula, but I'm crap at maths and MySQL, which makes things a little bit difficult. Indeed, I could eventually merge points if I had just one radius, but each point has its own radius and I'm struggling.
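In other words, I believe the per-pair condition I need is "the two circles touch", i.e. distance <= radius1 + radius2. This is how I picture it in pseudocode (C# with made-up names, since I can't express it in MySQL yet):

using System;

static class TouchSketch
{
    // Great-circle distance (spherical law of cosines), in km.
    static double DistanceKm(double lat1, double lon1, double lat2, double lon2)
    {
        double d2r = Math.PI / 180.0;
        return 6371.0 * Math.Acos(
            Math.Sin(lat1 * d2r) * Math.Sin(lat2 * d2r) +
            Math.Cos(lat1 * d2r) * Math.Cos(lat2 * d2r) * Math.Cos((lon2 - lon1) * d2r));
    }

    // Two points should be merged when their circles touch or overlap.
    static bool Touch(double lat1, double lon1, double r1Km,
                      double lat2, double lon2, double r2Km)
        => DistanceKm(lat1, lon1, lat2, lon2) <= r1Km + r2Km;
}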
This is for a student project and any help will be much, much appreciated.
References:
- My query on SQL Fiddle: http://sqlfiddle.com/#!2/3a42b/2
(contains a SQL Fiddle example for the Haversine formula in a comment)
- The Haversine formula in a MySQL query (works for checking all points inside a given radius):
SELECT *, ( 6371 * acos( cos( radians(@my_lat) ) * cos( radians(destination.latitude) )
                       * cos( radians(destination.longitude) - radians(@my_lon) )
                       + sin( radians(@my_lat) ) * sin( radians(destination.latitude) ) ) ) AS distance
FROM destination
ORDER BY distance
LIMIT 1;
This operation may be too complicated to execute without the aid of PHP or another programming language. Here's how you could do it in PHP:
<?php
$link = mysqli_connect("host", "user", "pass", "database");

// Grab all the points from the db and push them into an array
$sql = "SELECT * FROM data";
$res = $link->query($sql);
$arr = array();
for($i = 0; $i < mysqli_num_rows($res); $i++){
    array_push($arr, mysqli_fetch_assoc($res));
}

// Cycle through the point array, eliminating those points that "touch"
$rad = 1000; // radius in km
for($i = 0; $i < count($arr); ++$i){
    $lat1 = $arr[$i]['lat'];
    $lon1 = $arr[$i]['long'];
    for($j = 0; $j < count($arr); ++$j){
        if($i != $j && isset($arr[$i]) && isset($arr[$j])){ // do not compare a point to itself
            $lat2 = $arr[$j]['lat'];
            $lon2 = $arr[$j]['long'];
            // get the distance between each pair of points using the haversine formula
            $dist = acos( sin($lat1*pi()/180)*sin($lat2*pi()/180) + cos($lat1*pi()/180)*cos($lat2*pi()/180)*cos($lon2*pi()/180-$lon1*pi()/180) ) * 6371;
            if($dist < $rad){
                echo "Removing point id:".$arr[$i]['id']."<br>";
                unset($arr[$i]);
            }
        }
    }
}

// Display results
echo "Remaining points:<br>";
foreach($arr as $val){
    echo "id=".$val['id']."<br>";
}
?>
The output of this code on the data you provided is:
Removing point id:1
Removing point id:2
Remaining points:
id=3
id=4
Note that this just removes the overlapping points; it doesn't do any averaging of positions. You could easily add that though — see the sketch below. Hope this helps.
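For example, the averaging step could look something like this (a C# sketch of the idea rather than drop-in PHP; porting it back is mechanical):

using System;
using System.Linq;

class MergeSketch
{
    static void Main()
    {
        // A hypothetical cluster of points already judged to "touch".
        var cluster = new[] { (Lat: 50.00702, Lon: -90.27137),
                              (Lat: 52.00702, Lon: -92.27137) };
        // Replace the cluster with its average position.
        double avgLat = cluster.Average(p => p.Lat);
        double avgLon = cluster.Average(p => p.Lon);
        // A plain arithmetic mean is fine for nearby points, but beware
        // of longitude wrap-around near the ±180° meridian.
        Console.WriteLine($"merged point: {avgLat}, {avgLon}");
    }
}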
I want to solve the following problem:

minimize    E[T]
subject to  λi * pi - μi <= 0,                     for all i = 1,...,n
            (λ0 + sum(λi * (1 - pi))) - μ0 <= 0,
            pi - 1 <= 0,                           for all i = 1,...,n
            pi >= 0,                               for all i = 1,...,n

where

E[T] = (λ0 + sum(λi * (1 - pi))) / ( (λ0 + sum(λi)) * (μ0 - (λ0 + sum(λi * (1 - pi)))) )
     + sum( (pi * λi) / ( (λ0 + sum(λi)) * (μi - pi * λi) ) )

and all sums go from i = 1 to n.
Here is what we know about the parameters: n = 2, λ0 = 0, μ0 = 1, λ1 = free parameter, λ2 = 2, μ1 = μ2 = 2.
This problem can be handled as an inequality-constrained minimization problem.
I know that λ1 goes from 0 to 3, and what I want to get are p1 and p2. p1 and p2 are between 0 and 1.
And how can I choose the starting points? Can this problem be solved in Matlab?
I tried to use fmincon with the interior-point algorithm in Matlab, but I don't really know how a linearly increasing parameter can appear in the nonlinear constraints.
If you can suggest approaches or other functions that can handle this problem properly, I would be pleased.
SELECT postcode, lat, lng,
       truncate(
           degrees(
               acos( sin(radians(lat)) * sin(radians('.$latitude.'))
                   + cos(radians(lat)) * cos(radians('.$latitude.'))
                   * cos(radians(lng - ('.$longitude.'))) )
           ) * 69.172, 2) AS distance
FROM myData
This query calculates distance (in miles), but when I check the distance for the same latitude and longitude on Google Maps, my result doesn't match. If the distance is around 10 miles the result is fairly accurate, but beyond that it goes wrong (for example, my result showed 13 miles while Google showed 22 miles for the same postcode values).
I got this query from http://forums.mysql.com/read.php?23,3868,3868#msg-3868
How can I make it accurate? Any ideas?
Thanks for help.
UPDATE
I tried @Blixt's code in PHP. I picked two sample postcodes and their lats/longs:
//B28 9ET
$lat1 = 52.418819;
$long1 = -1.8481053;
//CV5 8BX
$lat2 = 52.4125573;
$long2 = -1.5407743;
$dtr = M_PI / 180;
$latA = $lat1 * $dtr;
$lonA = $long1 * $dtr;
$latB = $lat2 * $dtr;
$lonB = $long2 * $dtr;
$EarthRadius = 3958.76; //miles
echo $distance = $EarthRadius * acos(cos($latA) * cos($latB) * cos($lonB - $lonA) + sin($latA) * sin($latB));
Results:
My app - 12.95 miles
Google - 17.8 miles
Any ideas how to get it right?
Have a look at this source code. When I tested it against various other measurement services it seemed to get the same results. It's C# but the math should be easy enough to convert.
Here are the relevant pieces:
public const double EarthRadius = 6371.0072; // Kilometers
//public const double EarthRadius = 3958.76; // Miles
/* ... */
const double dtr = Math.PI / 180;
double latA = this.Latitude * dtr;
double lonA = this.Longitude * dtr;
double latB = other.Latitude * dtr;
double lonB = other.Longitude * dtr;
return GpsLocation.EarthRadius * Math.Acos(Math.Cos(latA) * Math.Cos(latB) * Math.Cos(lonB - lonA) + Math.Sin(latA) * Math.Sin(latB));
Note: The Earth is not perfectly spherical, so the constants used may differ. There is no easy way to make the measurements truly exact. See Wikipedia.
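For instance, plugging the two postcodes from the question into that formula (a standalone sketch, with GpsLocation replaced by plain doubles):

using System;

class DistanceCheck
{
    const double EarthRadius = 3958.76; // miles

    static void Main()
    {
        const double dtr = Math.PI / 180;
        double latA = 52.418819 * dtr, lonA = -1.8481053 * dtr;  // B28 9ET
        double latB = 52.4125573 * dtr, lonB = -1.5407743 * dtr; // CV5 8BX
        double miles = EarthRadius * Math.Acos(
            Math.Cos(latA) * Math.Cos(latB) * Math.Cos(lonB - lonA) +
            Math.Sin(latA) * Math.Sin(latB));
        Console.WriteLine(miles); // ~12.96 - the straight-line (great-circle) distance
    }
}

That agrees with the ~12.95 miles from the update, so the formula is working; the 17.8 miles from Google is almost certainly a driving-route distance, which is always at least the great-circle distance.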
Original Question
I am looking for a function that attempts to quantify how "distant" (or distinct) two colors are. This question is really in two parts:
What color space best represents human vision?
What distance metric in that space best represents human vision (Euclidean?)
Convert to L*a*b* (aka just plain "Lab"; you'll also see references to "CIELAB"). A good quick measure of color difference is
(L1-L2)^2 + (a1-a2)^2 + (b1-b2)^2
Color scientists have other, more refined measures, which may not be worth the bother, depending on the accuracy needed for what you're doing.
The a and b values represent opposing colors in a way similar to how cones work, and may be negative or positive. Neutral colors (white, grays) have a=0, b=0. L is brightness defined in a particular way, from zero (pure darkness) up to 100 (diffuse white).
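As a quick sketch of that measure (assuming you already have both colors in Lab; the names and example values are mine):

using System;

class LabDiff
{
    // Squared CIE76 color difference: exactly the (L1-L2)^2 + (a1-a2)^2 + (b1-b2)^2 above.
    static double DeltaE76Squared(double L1, double a1, double b1,
                                  double L2, double a2, double b2)
        => (L1 - L2) * (L1 - L2) + (a1 - a2) * (a1 - a2) + (b1 - b2) * (b1 - b2);

    static void Main()
    {
        // Mid gray (50, 0, 0) vs. a reddish color (55, 20, 10) - made-up values.
        Console.WriteLine(Math.Sqrt(DeltaE76Squared(50, 0, 0, 55, 20, 10)));
    }
}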
Crude explanation: given a color, our eyes distinguish between two broad ranges of wavelength - blue vs. longer wavelengths. Then, thanks to a more recent genetic mutation, the longer-wavelength cones bifurcated into two, distinguishing for us red vs. green.
By the way, it'll be great for your career to rise above your color-caveman colleagues who know only "RGB" or "CMYK", which are great for devices but suck for serious perception work. I've worked for imaging scientists who didn't know a thing about this stuff!
For more fun reading on color difference theory, try:
http://white.stanford.edu/~brian/scielab/introduction.html
For info and links on color theory in general, websurf starting with http://www.efg2.com/Lab/Library/Color/ and http://www.poynton.com/Poynton-color.html
More detail on Lab at http://en.kioskea.net/video/cie-lab.php3. I can't at this time find a non-ugly page that actually has the conversion formulas, but I'm sure someone will edit this answer to include one.
As the cmetric.htm link above failed for me, as well as many other implementations for color distance I found, after a very long journey I figured out how to calculate the best, most scientifically accurate color distance - deltaE - from 2 RGB (!) values, using OpenCV.
This required 3 color space conversions + some code conversion from JavaScript (http://svn.int64.org/viewvc/int64/colors/colors.js) to C++.
And finally the code (it seems to work right out of the box - hope no one finds a serious bug there... but it seems fine after a number of tests):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/photo/photo.hpp>
#include <math.h>
using namespace cv;
using namespace std;
#define REF_X 95.047 // Observer = 2°, Illuminant = D65
#define REF_Y 100.000
#define REF_Z 108.883
void bgr2xyz( const Vec3b& BGR, Vec3d& XYZ );
void xyz2lab( const Vec3d& XYZ, Vec3d& Lab );
void lab2lch( const Vec3d& Lab, Vec3d& LCH );
double deltaE2000( const Vec3b& bgr1, const Vec3b& bgr2 );
double deltaE2000( const Vec3d& lch1, const Vec3d& lch2 );
void bgr2xyz( const Vec3b& BGR, Vec3d& XYZ )
{
    double r = (double)BGR[2] / 255.0;
    double g = (double)BGR[1] / 255.0;
    double b = (double)BGR[0] / 255.0;
    if( r > 0.04045 )
        r = pow( ( r + 0.055 ) / 1.055, 2.4 );
    else
        r = r / 12.92;
    if( g > 0.04045 )
        g = pow( ( g + 0.055 ) / 1.055, 2.4 );
    else
        g = g / 12.92;
    if( b > 0.04045 )
        b = pow( ( b + 0.055 ) / 1.055, 2.4 );
    else
        b = b / 12.92;
    r *= 100.0;
    g *= 100.0;
    b *= 100.0;
    XYZ[0] = r * 0.4124 + g * 0.3576 + b * 0.1805;
    XYZ[1] = r * 0.2126 + g * 0.7152 + b * 0.0722;
    XYZ[2] = r * 0.0193 + g * 0.1192 + b * 0.9505;
}
void xyz2lab( const Vec3d& XYZ, Vec3d& Lab )
{
    double x = XYZ[0] / REF_X;
    double y = XYZ[1] / REF_Y;
    double z = XYZ[2] / REF_Z;
    if( x > 0.008856 )
        x = pow( x, 1.0 / 3.0 );
    else
        x = ( 7.787 * x ) + ( 16.0 / 116.0 );
    if( y > 0.008856 )
        y = pow( y, 1.0 / 3.0 );
    else
        y = ( 7.787 * y ) + ( 16.0 / 116.0 );
    if( z > 0.008856 )
        z = pow( z, 1.0 / 3.0 );
    else
        z = ( 7.787 * z ) + ( 16.0 / 116.0 );
    Lab[0] = ( 116.0 * y ) - 16.0;
    Lab[1] = 500.0 * ( x - y );
    Lab[2] = 200.0 * ( y - z );
}
void lab2lch( const Vec3d& Lab, Vec3d& LCH )
{
    LCH[0] = Lab[0];
    LCH[1] = sqrt( ( Lab[1] * Lab[1] ) + ( Lab[2] * Lab[2] ) );
    LCH[2] = atan2( Lab[2], Lab[1] );
}
double deltaE2000( const Vec3b& bgr1, const Vec3b& bgr2 )
{
    Vec3d xyz1, xyz2, lab1, lab2, lch1, lch2;
    bgr2xyz( bgr1, xyz1 );
    bgr2xyz( bgr2, xyz2 );
    xyz2lab( xyz1, lab1 );
    xyz2lab( xyz2, lab2 );
    lab2lch( lab1, lch1 );
    lab2lch( lab2, lch2 );
    return deltaE2000( lch1, lch2 );
}
double deltaE2000( const Vec3d& lch1, const Vec3d& lch2 )
{
    double avg_L = ( lch1[0] + lch2[0] ) * 0.5;
    double delta_L = lch2[0] - lch1[0];
    double avg_C = ( lch1[1] + lch2[1] ) * 0.5;
    double delta_C = lch1[1] - lch2[1];
    double avg_H = ( lch1[2] + lch2[2] ) * 0.5;
    if( fabs( lch1[2] - lch2[2] ) > CV_PI )
        avg_H += CV_PI;
    double delta_H = lch2[2] - lch1[2];
    if( fabs( delta_H ) > CV_PI )
    {
        if( lch2[2] <= lch1[2] )
            delta_H += CV_PI * 2.0;
        else
            delta_H -= CV_PI * 2.0;
    }
    delta_H = sqrt( lch1[1] * lch2[1] ) * sin( delta_H ) * 2.0;
    double T = 1.0 -
        0.17 * cos( avg_H - CV_PI / 6.0 ) +
        0.24 * cos( avg_H * 2.0 ) +
        0.32 * cos( avg_H * 3.0 + CV_PI / 30.0 ) -
        0.20 * cos( avg_H * 4.0 - CV_PI * 7.0 / 20.0 );
    double SL = avg_L - 50.0;
    SL *= SL;
    SL = SL * 0.015 / sqrt( SL + 20.0 ) + 1.0;
    double SC = avg_C * 0.045 + 1.0;
    double SH = avg_C * T * 0.015 + 1.0;
    double delta_Theta = avg_H / 25.0 - CV_PI * 11.0 / 180.0;
    delta_Theta = exp( delta_Theta * -delta_Theta ) * ( CV_PI / 6.0 );
    double RT = pow( avg_C, 7.0 );
    RT = sqrt( RT / ( RT + 6103515625.0 ) ) * sin( delta_Theta ) * -2.0; // 6103515625 = 25^7
    delta_L /= SL;
    delta_C /= SC;
    delta_H /= SH;
    return sqrt( delta_L * delta_L + delta_C * delta_C + delta_H * delta_H + RT * delta_C * delta_H );
}
Hope it helps someone :)
HSL and HSV are better for human color perception. According to Wikipedia:
It is sometimes preferable in working with art materials, digitized images, or other media, to use the HSV or HSL color model over alternative models such as RGB or CMYK, because of differences in the ways the models emulate how humans perceive color. RGB and CMYK are additive and subtractive models, respectively, modelling the way that primary color lights or pigments (respectively) combine to form new colors when mixed.
The easiest distance would of course be to just consider the colors as 3D vectors originating from the same origin and take the distance between their end points.
If you need to account for factors such as green being more prominent in judging intensity, you can weight the values.
ImageMagick provides the following scales:
red: 0.3
green: 0.6
blue: 0.1
Of course, values like this are only meaningful in relation to values for other colors, not as something meaningful to humans in themselves, so all you could use the values for is similarity ordering.
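For example, a weighted squared distance using those scales might look like this (a sketch; as noted, only the relative ordering of the results is meaningful):

using System;

class WeightedRgb
{
    // Weighted squared RGB distance with the ImageMagick-style scales above.
    static double Distance2(byte r1, byte g1, byte b1, byte r2, byte g2, byte b2)
        => 0.3 * (r1 - r2) * (r1 - r2)
         + 0.6 * (g1 - g2) * (g1 - g2)
         + 0.1 * (b1 - b2) * (b1 - b2);

    static void Main()
    {
        // Same raw delta of 50, but the green shift counts six times the blue one.
        Console.WriteLine(Distance2(10, 10, 10, 10, 60, 10)); // 1500
        Console.WriteLine(Distance2(10, 10, 10, 10, 10, 60)); // 250
    }
}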
Well, as a first port of call, I'd say that of the common metrics, HSV (Hue, Saturation and Value) or HSL better represent how humans perceive colour than, say, RGB or CMYK. See HSL, HSV on Wikipedia.
I suppose naively I would plot the points in the HSL space for the two colours and calculate the magnitude of the difference vector. However, this would mean that bright yellow and bright green would be considered just as different as green is from dark green. But then many consider red and pink to be two different colours.
Moreover, difference vectors in the same direction in this parameter space are not equal. For instance, the human eye picks up green much better than other colours. A shift in hue from green by the same amount as a shift from red may seem greater. Also, a shift in saturation from a small amount to zero is the difference between grey and pink; elsewhere, the same shift would be the difference between two shades of red.
From a programmer's point of view, you would need to plot the difference vectors, but modified by a proportionality matrix that adjusts their lengths in various regions of the HSL space - this would be fairly arbitrary, based on various colour-theory ideas and tweaked depending on what you want to apply it to. A toy example is sketched below.
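As a toy illustration of that idea (entirely arbitrary weights standing in for the proportionality matrix; a real implementation would vary them across the HSL space):

using System;

class HslDiff
{
    // Toy weighted HSL distance. Hue is circular, so its difference
    // must wrap around at 360 degrees before weighting.
    static double Distance(double h1, double s1, double l1,
                           double h2, double s2, double l2)
    {
        double dh = Math.Abs(h1 - h2);
        if (dh > 180.0) dh = 360.0 - dh; // shortest way around the hue circle
        dh /= 180.0;                     // normalize to [0, 1]
        const double wh = 0.5, ws = 0.3, wl = 0.2; // arbitrary example weights
        return Math.Sqrt(wh * dh * dh +
                         ws * (s1 - s2) * (s1 - s2) +
                         wl * (l1 - l2) * (l1 - l2));
    }

    static void Main() =>
        // Two near-red hues on opposite sides of the 0/360 seam.
        Console.WriteLine(Distance(350, 0.8, 0.5, 10, 0.8, 0.5));
}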
Even better, you could see if anyone has already done such a thing online...
The Wikipedia article on color differences lists a number of color spaces and distance metrics designed to agree with human perception of color distances.
As someone who is color blind, I believe it is good to try to add more separation than normal vision requires. The most common form of color blindness is red/green deficiency. It doesn't mean that you can't see red or green; it means that they are more difficult to see and the differences between them are more difficult to see. So it takes a larger separation before a color-blind person can tell the difference.