Please suggest an example of a dot chart's hover functionality using gRaphael

I am trying to create a dot chart (bubble chart) using gRaphael, but the documentation is not very clear on how to add hover effects when the user hovers over the dots.
Could anyone suggest some examples or give some tips on this?
Thanks in advance.

You have to use aDotChart.hoverDot() to register a "dot hover listener". The following is the source code of the example at http://cancerbero.vacau.com/gwt/graphael4gwtGallery/?test=dot1, which does what you describe. It is Java code (graphael4gwt), not JavaScript, but I think it can give you an idea of what the JavaScript code should look like:
double[] xs = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23};
double[] ys = {7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
double[] data = {294, 300, 204, 255, 348, 383, 334, 217, 114, 33, 44, 26, 41, 39,
52, 17, 13, 2, 0, 2, 5, 6, 64, 153, 294, 313, 195, 280, 365, 392, 340, 184,
87, 35, 43, 55, 53, 79, 49, 19, 6, 1, 0, 1, 1, 10, 50, 181, 246, 246, 220,
249, 355, 373, 332, 233, 85, 54, 28, 33, 45, 72, 54, 28, 5, 5, 0, 1, 2, 3,
58, 167, 206, 245, 194, 207, 334, 290, 261, 160, 61, 28, 11, 26, 33, 46, 36,
5, 6, 0, 0, 0, 0, 0, 0, 9, 9, 10, 7, 10, 14, 3, 3, 7, 0, 3, 4, 4, 6, 28, 24,
3, 5, 0, 0, 0, 0, 0, 0, 4, 3, 4, 4, 3, 4, 13, 10, 7, 2, 3, 6, 1, 9, 33, 32, 6,
2, 1, 3, 0, 0, 4, 40, 128, 212, 263, 202, 248, 307, 306, 284, 222, 79, 39, 26,
33, 40, 61, 54, 17, 3, 0, 0, 0, 3, 7, 70, 199};
String[] axisy = {"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"};
String[] axisx = {"12am", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12pm", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11"};
DotOpts opts = new DotOpts();
opts.setSymbol("o");
opts.setMax(10);
opts.setHeat(true);
opts.setAxis("0 0 1 1");
opts.setAxisxlabels(axisx);
opts.setAxisylabels(axisy);
final DotChart c1 = getGPaper().dotchart(10, 10, 620, 260, xs, ys, data, opts);
c1.hoverDot(new DotHoverListener() {
    @Override
    public void hoverOut(DotContext ctx) {
        if (tag != null)
            tag.remove();
    }
    @Override
    public void hoverIn(DotContext ctx) {
        // "tag" must be a GShape field of the enclosing class
        tag = (GShape) getGPaper().tag(ctx.getX(), ctx.getY(), ctx.getValue() + "", 0, ctx.getR() + 2)
                .insertBefore(ctx).show();
    }
});

Thanks for your reply, cancerbero.
Here is my JavaScript solution:
dotChart.hover(function () { // onmouseover
    dotChart.covers = r.set();
    dotChart.covers.push(r.tag(this.x, this.y, this.value, 0, 10).insertBefore(this));
}, function () { // onmouseout
    if (dotChart.covers != null) {
        dotChart.covers.remove();
    }
});
This has worked fine :)

Related

AWK: post-processing of the data based on two columns

I am dealing with the post-processing of CSV logs arranged in a multi-column format in the following order: the first column corresponds to the line number (ID), the second one contains its population (POP, the number of samples that fell into this ID), and the third column (dG) represents some inherent value of this ID (which is always negative):
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
27, 1, -6.6500
28, 5, -6.5700
29, 3, -6.5500
30, 2, -6.4600
31, 2, -6.4500
32, 1, -6.3000
33, 7, -6.2900
34, 1, -6.2100
35, 1, -6.2000
36, 3, -6.1800
37, 1, -6.1700
38, 4, -6.1300
39, 1, -6.1000
40, 2, -6.0600
41, 3, -6.0600
42, 8, -6.0200
43, 2, -6.0100
44, 1, -6.0100
45, 1, -5.9800
46, 2, -5.9700
47, 1, -5.9300
48, 6, -5.8800
49, 4, -5.8300
50, 4, -5.8000
51, 2, -5.7800
52, 3, -5.7200
53, 1, -5.6600
54, 1, -5.6500
55, 4, -5.6400
56, 2, -5.6300
57, 1, -5.5700
58, 1, -5.5600
59, 1, -5.5200
60, 1, -5.5000
61, 3, -5.4200
62, 4, -5.3600
63, 1, -5.3100
64, 5, -5.2500
65, 5, -5.1600
66, 1, -5.1100
67, 1, -5.0300
68, 1, -4.9700
69, 1, -4.7700
70, 2, -4.6600
In order to reduce the number of lines I filtered this CSV, searching for the line with the highest number in the second column (POP), using the following AWK expression:
# search the CSV for the line with the highest POP and save all lines before it, while keeping a minimal number of lines (3) in case this line is found at the beginning of the CSV.
awk -v min_lines=3 -F ", " 'a < $2 {for(idx=0; idx < i; idx++) {print arr[idx]} print $0; a=int($2); i=0; printed=NR} a > $2 && NR > 1 {arr[i]=$0; i++}END{if(printed <= min_lines) {for(idx = 0; idx <= min_lines - printed; idx++){print arr[idx]}}}' input.csv > output.csv
thus obtaining the following reduced output CSV, which still has many lines since the searched-for line (with the highest POP) is located on the 26th line:
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
How would it be possible to further customize my filter by modifying my AWK expression (or piping it to something else) in order to additionally consider only the lines whose third-column value (dG) differs little from that of the first line (which has the most negative value)? For example, to consider only the lines that differ by no more than 20% in terms of dG from the first line, while keeping all other conditions the same:
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
Both tasks can be done in a single awk:
awk -F ', ' 'NR==1 {next} FNR==NR {if (max < $2) {max=$2; n=FNR}; if (FNR==2) dg = $3 * .8; next} $3+0 == $3 && (FNR == n+1 || $3 > dg) {exit} 1' file file
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
To make it more readable:
awk -F ', ' '
NR == 1 {
    next
}
FNR == NR {
    if (max < $2) {
        max = $2
        n = FNR
    }
    if (FNR == 2)
        dg = $3 * .8
    next
}
$3 + 0 == $3 && (FNR == n+1 || $3 > dg) {
    exit
}
1' file file
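If reading the input twice is inconvenient (for example when the data arrives on a pipe), a single-pass variant that buffers the rows in memory should give the same result. This is only a sketch under the same assumptions as above (header on line 1, the first data line holding the most negative dG, and input.csv standing in for your file):
awk -F ', ' '
NR == 1 { print; next }                          # keep the header
{ rows[++nr] = $0; pop[nr] = $2; dg[nr] = $3 }   # buffer every data line
END {
    for (i = 1; i <= nr; i++)                    # locate the line with the highest POP
        if (pop[i] > max) { max = pop[i]; n = i }
    thr = dg[1] * 0.8                            # dG is negative, so 80% of it is the cut-off
    for (i = 1; i <= n && dg[i] <= thr; i++)     # stop at the max-POP line or at the first dG above the cut-off
        print rows[i]
}' input.csv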

AWK: filtering of the multi-column data

I am dealing with the post-processing of CSV log files arranged in a multi-column format. Usually the first column corresponds to the line number (ID), the second one contains its population (POP, the number of samples that fell into this ID) and the third column (dG) represents some inherent value of this ID (which is always negative):
ID, POP, dG
1, 7, -6.9700
2, 2, -6.9500
3, 2, -6.8500
4, 6, -6.7200
5, 14, -6.7100
6, 5, -6.7000
7, 10, -6.5600
8, 10, -6.4800
9, 7, -6.4500
10, 3, -6.4400
11, 8, -6.4300
12, 10, -6.4200
13, 3, -6.3300
14, 7, -6.2200
15, 1, -6.2000
16, 3, -6.2000
17, 4, -6.1700
18, 1, -6.0500
19, 9, -6.0200
20, 1, -6.0100
21, 1, -6.0000
22, 3, -5.9900
23, 4, -5.9800
24, 3, -5.9200
25, 2, -5.9100
26, 1, -5.8900
27, 1, -5.8500
28, 1, -5.8200
29, 1, -5.7900
30, 8, -5.7800
31, 1, -5.7800
32, 1, -5.7200
33, 3, -5.7100
34, 2, -5.7100
35, 1, -5.6900
36, 4, -5.6800
37, 2, -5.6500
38, 4, -5.6100
39, 1, -5.5900
40, 1, -5.5600
41, 1, -5.5500
42, 2, -5.5500
43, 1, -5.5200
44, 1, -5.5100
45, 2, -5.5000
46, 1, -5.5000
47, 3, -5.4700
48, 2, -5.4500
49, 1, -5.4500
50, 4, -5.4300
51, 1, -5.4300
52, 1, -5.3800
53, 2, -5.3800
54, 1, -5.3500
55, 1, -5.2800
56, 1, -5.2500
57, 2, -5.2500
58, 2, -5.2400
59, 2, -5.2300
60, 1, -5.1400
61, 1, -5.1100
62, 1, -5.1000
63, 2, -5.0300
64, 2, -5.0100
65, 2, -5.0100
66, 1, -4.9700
67, 1, -4.9200
68, 1, -4.9000
69, 2, -4.9000
70, 1, -4.8900
71, 1, -4.8600
72, 3, -4.7900
73, 2, -4.7900
74, 1, -4.7900
75, 1, -4.7700
76, 2, -4.7600
77, 1, -4.7500
78, 1, -4.7400
79, 1, -4.7300
80, 1, -4.7200
81, 2, -4.7100
82, 1, -4.6800
83, 2, -4.6300
84, 1, -4.5500
85, 1, -4.5000
86, 1, -4.4800
87, 2, -4.4500
88, 1, -4.4300
89, 1, -4.3900
90, 1, -4.3000
91, 1, -4.2500
92, 1, -4.2300
93, 1, -4.2200
94, 2, -4.1600
95, 1, -4.1500
96, 1, -4.1100
97, 1, -4.0300
98, 1, -4.0100
I need to reduce the total number of these lines, keeping in the output CSV only the first N lines, from the first line up to the line with the biggest population (POP, the value of the second column) observed in the whole dataset. So in my example the expected output should be the first 5 lines, since the 5th ID has the biggest value of the second column (POP) compared to all 98 lines:
ID, POP, dG
1, 7, -6.9700
2, 2, -6.9500
3, 2, -6.8500
4, 6, -6.7200
5, 14, -6.7100
Could you suggest an AWK solution which would accept my CSV file and produce a new one after such filtering based on the values in the second column?
You could try this awk command:
awk -F "," 'a < $2 {for(idx=0; idx < i; idx++) {print arr[idx]} print $0; a=int($2); i=0} a > $2 && NR > 1 {arr[i]=$0; i++}' input
See demo at: https://awk.js.org/?gist=c8751cc25e444fb2e2b1a8f29849f127
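For readability, the same command can be spread over multiple lines with comments (a sketch of the same logic; the behavior is unchanged):
awk -F "," '
a < $2 {                              # POP is higher than anything seen so far (this also matches the header line)
    for (idx = 0; idx < i; idx++)     # flush the lines buffered since the previous maximum
        print arr[idx]
    print $0                          # print the new maximum line itself
    a = int($2)                       # remember the new maximum
    i = 0                             # reset the buffer
}
a > $2 && NR > 1 {                    # strictly smaller POP (and not the header): buffer the line
    arr[i] = $0
    i++
}' input
Lines buffered after the last maximum are never flushed, which is what truncates the output at the max-POP line.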
This approach processes the file twice: once to find the max, and again to print the lines up to the max. I've incorporated your request to print a minimum number of lines.
awk -F ', ' -v min_lines=5 '
NR == FNR {
    if (FNR > 1 && $2 + 0 > max)   # skip the header and force a numeric comparison
        max = $2 + 0
    next
}
{ print }
$2 == max {
    for (i = FNR; i <= min_lines; i++) {
        getline
        print
    }
    exit
}
' file.csv file.csv
$ awk -F, -v minlines=5 'NR==FNR { if($2>=max && NR>1) {max=$2; maxi=NR} next }
FNR<=minlines+1 || FNR<=maxi' file{,}
ID, POP, dG
1, 7, -6.9700
2, 2, -6.9500
3, 2, -6.8500
4, 6, -6.7200
5, 14, -6.7100
This will print up to the last occurrence of the max value. If you want the first occurrence, change $2>=max to $2>max.
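Applying that change, the first-occurrence variant would read (a sketch; only the comparison differs, the invocation stays the same):
$ awk -F, -v minlines=5 'NR==FNR { if($2>max && NR>1) {max=$2; maxi=NR} next }
     FNR<=minlines+1 || FNR<=maxi' file{,}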

AWK: multi-step filtering of data based on the selected column

I am dealing with the post-processing of multi-column CSV arranged in a fixed format: the first column corresponds to the line number (ID), the second one contains its population (POP, the number of samples that fell into this ID) and the third column (dG) represents some inherent value of this ID (always negative):
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
27, 1, -6.6500
28, 5, -6.5700
29, 3, -6.5500
30, 2, -6.4600
31, 2, -6.4500
32, 1, -6.3000
33, 7, -6.2900
34, 1, -6.2100
35, 1, -6.2000
36, 3, -6.1800
37, 1, -6.1700
38, 4, -6.1300
39, 1, -6.1000
40, 2, -6.0600
41, 3, -6.0600
42, 8, -6.0200
43, 2, -6.0100
44, 1, -6.0100
45, 1, -5.9800
46, 2, -5.9700
47, 1, -5.9300
48, 6, -5.8800
49, 4, -5.8300
50, 4, -5.8000
51, 2, -5.7800
52, 3, -5.7200
53, 1, -5.6600
54, 1, -5.6500
55, 4, -5.6400
56, 2, -5.6300
57, 1, -5.5700
58, 1, -5.5600
59, 1, -5.5200
60, 1, -5.5000
61, 3, -5.4200
62, 4, -5.3600
63, 1, -5.3100
64, 5, -5.2500
65, 5, -5.1600
66, 1, -5.1100
67, 1, -5.0300
68, 1, -4.9700
69, 1, -4.7700
70, 2, -4.6600
In order to reduce the number of lines I filtered this CSV, searching for the line with the highest number in the second column (POP), using the following AWK expression:
# search the CSV for the line with the highest POP and save all lines before it, while keeping a minimal number of lines (3) in case this line is found at the beginning of the CSV.
awk -v min_lines=3 -F ", " 'a < $2 {for(idx=0; idx < i; idx++) {print arr[idx]} print $0; a=int($2); i=0; printed=NR} a > $2 && NR > 1 {arr[i]=$0; i++}END{if(printed <= min_lines) {for(idx = 0; idx <= min_lines - printed; idx++){print arr[idx]}}}' input.csv > output.csv
For the simple case when the line with the maximum POP is located on the first line, the script will save this line (POP max) plus the 2 lines after it (min_lines=3).
For a more complicated case, when the line with the maximum POP is located in the middle of the CSV, the script detects this line plus all the preceding lines from the beginning of the CSV and lists them in the new CSV, keeping the original order. However, in that case output.csv contains too many lines, since the line with the highest POP is located on the 26th line:
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
In order to reduce the total number of lines to 3-5 in the output CSV, how would it be possible to customize my filter to save only the lines with a minor difference from the maximum (e.g. the values in the POP column should satisfy POP > 0.5 * max(POP)), comparing each line against the line having the biggest value in the POP column? Finally, I always need to keep the first line as well as the line with the maximal value in the output. So the AWK solution should filter the multi-line CSV in the following manner (please ignore the comments after #):
ID, POP, dG
1, 7, -9.6000
9, 16, -7.8100
26, 20, -6.6500 # this is POP max detected over all lines
This two-phase awk should work for you:
awk -F ', ' -v n=2 'NR == 1 {next}
       FNR==NR {if (max < $2) max=$2; next}
       FNR <= n || $2 > (.5 * max)' file file
ID, POP, dG
1, 7, -9.6000
9, 16, -7.8100
26, 20, -6.6500
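If the 50% cut-off needs to be adjustable, the same two-phase idea can take the factor as a variable; this is only a sketch (with f=0.5 it reproduces the output above):
awk -F ', ' -v f=0.5 '
NR == 1   { next }                          # skip the header during the first pass
FNR == NR { if (max < $2) max = $2; next }  # first pass: find the maximal POP
FNR <= 2 || $2 > f * max                    # second pass: keep the header, the first data line, and lines above the cut-off
' file file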

Timestamp in Google Spreadsheets not working properly anymore

Since last week's update in Google Spreadsheets, this code is not working properly anymore. The timestamp appears only when a user ENTERS data; when a user PASTES data, nothing happens.
This is the code we use:
function onEditSwapsheet(e) {
var s = e.source.getActiveSheet(),
sheets = ["5B","10B","15B","20B","25B","30B","35B","40B","50B","60B","70B","80B","90B","100B" ],
watchCols = [6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110,],
offsetCol = [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,],
sheetInd = sheets.indexOf(s.getName()),
colInd = watchCols.indexOf(e.range.columnStart);
if (sheetInd === -1 || colInd === -1 ) return;
e.range.offset(0, offsetCol[colInd])
.setValue(!e.value ? null : Utilities.formatDate(new Date(), "GMT+0100", "HH:mm dd/MM"))
}

Update query from multiple tables, for specific IDs and JOIN LIMIT 1

I have two tables, and I want to update the rows of torrents from scrapes every day.
scrapes:
id, torrent_id, name, status, complete, incomplete, downloaded
1, 1, http://tracker1.com, 1, 542, 23, 542
2, 1, http://tracker2.com, 1, 542, 23, 542
3, 2, http://tracker1.com, 1, 123, 34, 43
4, 2, http://tracker2.com, 1, 123, 34, 43
5, 3, http://tracker1.com, 1, 542, 23, 542
6, 3, http://tracker2.com, 1, 542, 23, 542
7, 4, http://tracker1.com, 1, 123, 34, 43
8, 4, http://tracker2.com, 1, 123, 34, 43
9, 5, http://tracker1.com, 1, 542, 23, 542
10, 5, http://tracker2.com, 1, 542, 23, 542
11, 6, http://tracker1.com, 1, 123, 34, 43
12, 6, http://tracker2.com, 1, 123, 34, 43
torrents:
id, name, complete, incomplete, downloaded
1, CentOS, 0, 0, 0
2, Ubuntu, 0, 0, 0
3, Debian, 0, 0, 0
4, Redhat, 0, 0, 0
5, Fedora, 0, 0, 0
6, Gentoo, 0, 0, 0
A torrent may have multiple scrape rows (one per tracker name), but I want to get the values only from the first one found (for better performance), and I also need to update only torrent IDs 1, 3 and 6 in one query.
UPDATE (SELECT * FROM scrapes WHERE torrent_id IN(1,3,6) GROUP BY torrent_id) as `myview` JOIN torrents ON myview.torrent_id=torrents.id SET torrent.complete=myview.complete WHERE 1