How to find duplicate sets of mysql ids from a bash script - mysql

+-----------+-----------+-------------+
| table_uid | group_uid | product_uid |
+-----------+-----------+-------------+
|      8901 |      5206 |         184 |
|      8902 |      5206 |         523 |
|      9194 |      5485 |         184 |
|      9195 |      5485 |         523 |
|      7438 |      1885 |         184 |
|      7439 |      1885 |         184 |
+-----------+-----------+-------------+
My goal here is to show any group_uids that contain the exact same set of product_uids. So group_uids 5206 and 5485 would end up displaying, while 1885 would not, since it does not have the same set of product_uids. I have to accomplish this through a bash script, as I do not have the ability to do this in MySQL 3.23.58 (yes, it's horribly old and I hate it, but it's not my choice). I'm trying to store the table_uid, group_uid and product_uid in an array and then compare each group_uid to see if they contain the same product_uids. Help is appreciated!

How are you displaying the table above?
You might have some success with GROUP_CONCAT (see "Can I concatenate multiple MySQL rows into one field?"):
SELECT group_uid, GROUP_CONCAT(product_uid SEPARATOR ','), count(*)
FROM <tab>
GROUP BY group_uid
HAVING count(*) > 1
I'm not sure how it would order the strings as I don't have mysql at present
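For what it's worth, GROUP_CONCAT only exists in MySQL 4.1 and later, so it may not help on 3.23.58. Assuming a newer server, here is a rough sketch (table name and credentials are placeholders, not from the question) of driving that query from bash and keeping only product sets shared by more than one group:
# Sketch only: "mytable", $USER, $PASS and $DB are placeholders.
mysql -N -s -u "$USER" -p"$PASS" "$DB" -e "
    SELECT group_uid, GROUP_CONCAT(product_uid ORDER BY product_uid SEPARATOR ',')
    FROM mytable
    GROUP BY group_uid;" |
awk -F'\t' '
    { sets[$2] = sets[$2] $1 " "; n[$2]++ }                       # collect group_uids per product set
    END { for (s in sets) if (n[s] > 1) print s ": " sets[s] }    # print duplicated sets only
'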

Here's a bit of awk to collect the groups that belong to the same set of products:
awk '
# Data rows of the mysql table output split into exactly 7 whitespace-separated
# fields (pipes included); NR > 3 skips the header lines.
NR > 3 && NF == 7 {
    prod[$4] = prod[$4] $6 " "      # append each product_uid ($6) to its group_uid ($4)
}
END {
    for (group in prod)
        groups[prod[group]] = groups[prod[group]] group " "   # index groups by their product list
    for (key in groups)
        print key ":" groups[key]
}
' mysql.out
184 523 :5206 5485
184 184 :1885
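The script above prints every product set; if you only want the sets that more than one group_uid shares (which is what the question asks for), a small variation of the same script (a sketch, not part of the original answer) counts the groups per set:
awk '
NR > 3 && NF == 7 {
    prod[$4] = prod[$4] $6 " "
}
END {
    for (group in prod) {
        groups[prod[group]] = groups[prod[group]] group " "
        n[prod[group]]++                      # how many groups share this product set
    }
    for (key in groups)
        if (n[key] > 1)                       # keep only sets shared by more than one group
            print key ":" groups[key]
}
' mysql.out
184 523 :5206 5485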
If you know which set of product ids you're interested in, you can pass that in:
awk -v prod_ids="184 523" '
NR > 3 && NF == 7 {
    prod[$4] = prod[$4] $6 " "
}
END {
    for (group in prod)
        groups[prod[group]] = groups[prod[group]] group " "
    key = prod_ids " "            # the stored keys carry a trailing space, so add one here too
    if (key in groups)
        print groups[key]
}
' mysql.out
5206 5485
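Both scripts read mysql.out, which is just the mysql client's table-formatted output saved to a file. A hedged example of producing it (the -t/--table flag asks the client for the bordered layout; credentials and the table name are placeholders):
# Sketch only: user, password, database and table names are placeholders.
mysql -t -u "$USER" -p"$PASS" "$DB" \
    -e "SELECT table_uid, group_uid, product_uid FROM mytable" > mysql.out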

Related

CSV output file using command line for wireshark IO graph statistics

I save the IO graph statistics as a CSV file containing the bits per second using the Wireshark GUI. Is there a way to generate this CSV file with command-line tshark? I can generate the statistics on the command line as bytes per second as follows:
tshark -nr test.pcap -q -z io,stat,1,BYTES
How do I generate bits/second and save it to a CSV file?
Any help is appreciated.
I don't know a way to do that using only tshark, but you can easily parse the output from tshark into a CSV file:
tshark -nr tmp.pcap -q -z io,stat,1,BYTES | grep -P "\d+\s+<>\s+\d+\s*\|\s+\d+" | awk -F '[ |]+' '{print $2","($5*8)}'
Explanations
grep -P "\d+\s+<>\s+\d+\s*\|\s+\d+" selects only the rows of the tshark output that contain the actual data (i.e., second <> second | transmitted bytes).
awk -F '[ |]+' '{print $2","($5*8)}' splits that data into fields using [ |]+ as the separator and prints fields 2 (the second at which the interval starts) and 5 (the transmitted bytes, multiplied by 8 to get bits) with a comma between them.
Another thing that may be good to know:
If you change the interval from 1 second to 0.5 seconds, then you have to allow a . in the grep part by adding \. between two \d digit patterns.
Otherwise the result will be an empty *.csv file.
grep -P "\d{1,2}\.{1}\d{1,2}\s+<>\s+\d{1,2}\.{1}\d{1,2}\s*\|\s+\d+"
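For example, a sketch of the full pipeline at a 0.5 second interval (using the test.pcap name from the question; untested here, so treat it as a starting point):
tshark -nr test.pcap -q -z io,stat,0.5,BYTES \
  | grep -P "\d{1,2}\.{1}\d{1,2}\s+<>\s+\d{1,2}\.{1}\d{1,2}\s*\|\s+\d+" \
  | awk -F '[ |]+' '{print $2","($5*8)}'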
The answers in this thread gave me the keys to solving a similar problem with tshark io stats and I wanted to share the results and how it works. In my case, the task was to convert multiple columns of tshark io stat records with potential decimals in the data. This answer converts multiple data columns to csv, adds rudimentary headers, accounts for decimals in fields and variable numbers of spaces.
Complete command string
tshark -r capture.pcapng -q -z io,stat,30,,FRAMES,BYTES,"FRAMES()ip.src == 10.10.10.10","BYTES()ip.src == 10.10.10.10","FRAMES()ip.dst == 10.10.10.10","BYTES()ip.dst == 10.10.10.10" \
| grep -P "\d+\.?\d*\s+<>\s+|Interval +\|" \
| tr -d " " | tr "|" "," | sed -E 's/<>/,/; s/(^,|,$)//g; s/Interval/Start,Stop/g' > somefile.csv
Explanation
The command string has 3 major parts.
tshark creates the report with the data in columns
Extract the desired lines with grep
Use tr and sed to convert the records grep matched into a csv delimited file.
Part 1: tshark creates the report with the data in columns
tshark is run with -z io,stat at a 30 second interval, counting frames and bytes with various filters.
tshark -r capture.pcapng -q -z io,stat,30,,FRAMES,BYTES,"FRAMES()ip.src == 10.10.10.10","BYTES()ip.src == 10.10.10.10","FRAMES()ip.dst == 10.10.10.10","BYTES()ip.dst == 10.10.10.10"
Here is the output when run against my test pcap file:
=================================================================================================
| IO Statistics |
| |
| Duration: 179.179180 secs |
| Interval: 30 secs |
| |
| Col 1: Frames and bytes |
| 2: FRAMES |
| 3: BYTES |
| 4: FRAMES()ip.src == 10.10.10.10 |
| 5: BYTES()ip.src == 10.10.10.10 |
| 6: FRAMES()ip.dst == 10.10.10.10 |
| 7: BYTES()ip.dst == 10.10.10.10 |
|-----------------------------------------------------------------------------------------------|
| |1 |2 |3 |4 |5 |6 |7 |
| Interval | Frames | Bytes | FRAMES | BYTES | FRAMES | BYTES | FRAMES | BYTES |
|-----------------------------------------------------------------------------------------------|
| 0 <> 30 | 107813 | 120111352 | 107813 | 120111352 | 26682 | 15294257 | 80994 | 104808983 |
| 30 <> 60 | 122437 | 124508575 | 122437 | 124508575 | 49331 | 17080888 | 73017 | 107422509 |
| 60 <> 90 | 138999 | 135488315 | 138999 | 135488315 | 54829 | 22130920 | 84029 | 113348686 |
| 90 <> 120 | 158241 | 217781653 | 158241 | 217781653 | 42103 | 15870237 | 115971 | 201901201 |
| 120 <> 150 | 111708 | 131890800 | 111708 | 131890800 | 43709 | 18800647 | 67871 | 113082296 |
| 150 <> Dur | 123736 | 142639416 | 123736 | 142639416 | 50754 | 22053280 | 72786 | 120574520 |
=================================================================================================
Considerations
Looking at this output, we can see several items to consider:
Rows with data have a unique sequence in the Interval column of "space<>space", which we can use for matching.
We want the header line, so we will use the word "Interval" followed by spaces and then a "|" character.
The number of spaces in a column is variable, depending on the number of digits per measurement.
The Interval column gives both the start and the end of each interval. Either can be used, so we will keep both and let the user decide.
When using milliseconds there will be decimals in the Interval field.
Depending on the statistic requested, there may be decimals in the data columns.
The use of "|" as delimiters will require escaping in any regex statement that covers them.
Part 2: Extract the desired lines with grep
Once tshark produces output, we use grep with regex to extract the lines we want to save.
grep -P "\d+\.?\d*\s+<>\s+|Interval +\|"
grep will use the "Digit(s)Space(s)<>Space(s)" character sequence in the Interval column to match the lines with data. It also uses an OR to grab the header by matching the characters "Interval |".
grep -P # The "-P" flag turns on PCRE regex matching, which is not the same as egrep. With egrep, you will need to change the escaping.
"\d+ # Match on 1 or more Digits. This is the 1st set of numbers in the Interval column.
\.? # 0 or 1 Periods. We need this to handle possible fractional seconds.
\d* # 0 or more Digits. To handle possible fractional seconds.
\s+<>\s+ # 1 or more Spaces followed by the Characters "<>", then 1 or more Spaces.
| # Since this is not escaped, it is a regex OR
Interval\s+\|" # Match the String "Interval" followed by 1 or more Spaces and a literal "|".
From the tshark output, grep matched these lines:
| Interval | Frames | Bytes | FRAMES | BYTES | FRAMES | BYTES | FRAMES | BYTES |
| 0 <> 30 | 107813 | 120111352 | 107813 | 120111352 | 26682 | 15294257 | 80994 | 104808983 |
| 30 <> 60 | 122437 | 124508575 | 122437 | 124508575 | 49331 | 17080888 | 73017 | 107422509 |
| 60 <> 90 | 138999 | 135488315 | 138999 | 135488315 | 54829 | 22130920 | 84029 | 113348686 |
| 90 <> 120 | 158241 | 217781653 | 158241 | 217781653 | 42103 | 15870237 | 115971 | 201901201 |
| 120 <> 150 | 111708 | 131890800 | 111708 | 131890800 | 43709 | 18800647 | 67871 | 113082296 |
| 150 <> Dur | 123736 | 142639416 | 123736 | 142639416 | 50754 | 22053280 | 72786 | 120574520 |
Part 3: Use tr and sed to convert the records grep matched into a csv delimited file.
tr and sed are used for converting the lines grep matched into csv. tr does the bulk work of removing spaces and changing the "|" to ",". This is simpler and faster than using sed. However, sed is still needed for some cleanup work.
tr -d " " | tr "|" "," | sed -E 's/<>/,/; s/(^,|,$)//g; s/Interval/Start,Stop/g'
Here is how these commands perform the conversion. The first trick is to get rid of all of the spaces. This means we don't have to account for them in any regex sequences, making the rest of the work simpler.
| tr -d " " # Spaces are in the way, so delete them.
| tr "|" "," # Change all "|" Characters to ",".
| sed -E 's/<>/,/; # Change "<>" to "," splitting the Interval column.
s/(^,|,$)//g; # Delete leading and/or trailing "," on each line.
s/Interval/Start,Stop/g' # The split Interval column now needs two headers, so change the text "Interval" into "Start,Stop".
> somefile.csv # Redirect the output into somefile.csv
Final result
Once through this process, we have a csv output that can now be imported into your favorite csv tool, spreadsheet, or fed to a graphing program like gnuplot.
$ cat somefile.csv
Start,Stop,Frames,Bytes,FRAMES,BYTES,FRAMES,BYTES,FRAMES,BYTES
0,30,107813,120111352,107813,120111352,26682,15294257,80994,104808983
30,60,122437,124508575,122437,124508575,49331,17080888,73017,107422509
60,90,138999,135488315,138999,135488315,54829,22130920,84029,113348686
90,120,158241,217781653,158241,217781653,42103,15870237,115971,201901201
120,150,111708,131890800,111708,131890800,43709,18800647,67871,113082296
150,Dur,123736,142639416,123736,142639416,50754,22053280,72786,120574520
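As a quick usage example (a sketch; gnuplot and the column positions are assumptions based on the header above), the Bytes column can be plotted straight from the CSV:
# Drop the header line so gnuplot only sees numeric data, then plot Start vs Bytes.
tail -n +2 somefile.csv > plotdata.csv
gnuplot -persist <<'EOF'
set datafile separator ","
set xlabel "Interval start (s)"
set ylabel "Bytes"
plot "plotdata.csv" using 1:4 with linespoints title "Bytes per interval"
EOF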

bash - extract data from mysql table (GROUP BY) - how to process

I have a MySQL table:
+----+---------------------+-------+
| id | timestamp | value |
+----+---------------------+-------+
| 1 | 2016-03-29 18:53:28 | 1 |
| 2 | 2016-03-29 20:26:06 | 1 |
| 3 | 2016-03-29 20:26:22 | 1 |
+----+---------------------+-------+
3 rows in set (0.00 sec)
It is a table to hold water consumption data (each 1 in value is 1 liter of water).
I wrote a bash script to extract the data - the sum of liters of water by month.
watersum=`echo " SELECT MONTHNAME(timestamp), SUM(value) FROM woda GROUP BY YEAR(timestamp), MONTH(timestamp);" | mysql -s -u$SQUSER -p$SQPASS -h$SQHOST $SQLDB`
echo $watersum
gives me:
March 693 April 9768 May 11277 June 11987 July 10047 August 8570
I would like to save this data in a json file. How do I convert the string in $watersum to a json string?
Make watersum an array
watersum=( $(echo " SELECT MONTHNAME(timestamp), SUM(value) FROM woda GROUP BY YEAR(timestamp), MONTH(timestamp);" | mysql -s -u$SQUSER -p$SQPASS -h$SQHOST $SQLDB) )
echo "{" && for((i=0;i<"${#watersum[#]}";i+=2))
do
echo -n "\"${watersum[$i]}\":\"${watersum[((i+1))]}\"";
(( (i+2) == "${#watersum[#]}" )) || echo ","
done && echo;echo "}"
Output
{
"March":"693",
"April":"9768",
"May":"11277",
"June":"11987",
"July":"10047",
"August":"8570"
}
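If you want to write the result to a file and make sure it really is valid JSON, you can wrap the loop in a command group and pass it through a validator (a sketch; water.json is a hypothetical file name and python is assumed to be available):
{
    echo "{"
    for ((i = 0; i < ${#watersum[@]}; i += 2)); do
        echo -n "\"${watersum[i]}\":\"${watersum[i+1]}\""
        (( i + 2 == ${#watersum[@]} )) || echo ","
    done
    echo; echo "}"
} | tee water.json | python -m json.tool    # fails loudly if the output is not valid JSON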

Count and select all dates for a specific field in MySQL

I have a data format like this:
+----+--------+---------------------+
| ID | utente | data |
+----+--------+---------------------+
| 1 | Man1 | 2014-02-10 12:12:00 |
+----+--------+---------------------+
| 2 | Women1 | 2015-02-10 12:12:00 |
+----+--------+---------------------+
| 3 | Man2 | 2016-02-10 12:12:00 |
+----+--------+---------------------+
| 4 | Women1 | 2014-03-10 12:12:00 |
+----+--------+---------------------+
| 5 | Man1 | 2014-04-10 12:12:00 |
+----+--------+---------------------+
| 6 | Women1 | 2014-02-10 12:12:00 |
+----+--------+---------------------+
I want to make a report that organises the output in a way like this:
+---------+--------+-------+---------------------+---------------------+---------------------+
| IDs | utente | count | data1 | data2 | data3 |
+---------+--------+-------+---------------------+---------------------+---------------------+
| 1, 5 | Man1 | 2 | 2014-02-10 12:12:00 | 2014-04-10 12:12:00 | |
+---------+--------+-------+---------------------+---------------------+---------------------+
| 2, 4, 6 | Women1 | 3 | 2015-02-10 12:12:00 | 2014-03-10 12:12:00 | 2014-05-10 12:12:00 |
+---------+--------+-------+---------------------+---------------------+---------------------+
All the rows that include the same user (utente) more than once will be combined into one row with all the dates and the count of records.
Thanks
While it's certainly possible to write a query that returns the data in the format you want, I would suggest you use a GROUP BY query and two GROUP_CONCAT aggregate functions:
SELECT
    GROUP_CONCAT(ID) AS IDs,
    utente,
    COUNT(*) AS cnt,
    GROUP_CONCAT(data ORDER BY data) AS Dates
FROM
    tablename
GROUP BY
    utente
Then at the application level you can split the Dates field into multiple columns.
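If the "application" happens to be another shell script, the same split can be done with awk (a hypothetical sketch; credentials and the table name are placeholders, and the query is the one from the answer above):
mysql -N -s -u "$USER" -p"$PASS" "$DB" -e "
    SELECT GROUP_CONCAT(ID), utente, COUNT(*), GROUP_CONCAT(data ORDER BY data)
    FROM tablename GROUP BY utente;" |
awk -F'\t' '{
    n = split($4, dates, ",")                 # break the concatenated Dates field apart
    printf "%s\t%s\t%s", $1, $2, $3
    for (i = 1; i <= n; i++) printf "\t%s", dates[i]
    print ""
}'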
Looks like a fairly standard "Breaking" report, complicated only by the fact that your dates extend horizontally instead of down...
SELECT * FROM t ORDER BY utente, data
$lastutente = $lastdata = '';
echo "<table>\n";
while ($row = fetch()) {
    if ($lastutente != $row['utente']) {
        if ($lastutente != '') {
            /****
             * THIS SECTION REF'D BELOW
             ***/
            echo "<td>$cnt</td>\n";
            foreach ($datelst as $d)
                echo "<td>$d</td>\n";
            for ($i = count($datelst); $i < $NumberOfDateCells; $i++)
                echo "<td> </td>\n";
            echo "</tr>\n";
            /****
             * END OF SECTION REF'D BELOW
             ***/
        }
        echo "<tr><td>$row[utente]</td>\n"; // start a new row - you probably want to print other stuff too
        $datelst = array();
        $cnt = 0;
        $lastutente = $row['utente'];       // remember which user this row belongs to
    }
    if ($lastdata != $row['data']) {
        $datelst[] = $row['data'];
        $lastdata = $row['data'];
    }
    $cnt += $row['cnt']; // or $cnt++ if it's one per row
}
// print the end of the last row here - repeat the SECTION REF'D ABOVE
echo "</table>\n";
You could add a GROUP BY utente, data to your query above to put a little more load on mysql and a little less on your code - then you should have SUM(cnt) as cnt or COUNT(*) as cnt.

How to Set Variables and Process Variable for MySQL in a Perl Script

HERE IS MY TABLE EXAMPLE:
 Id |   Time   | Predicted | Actual | High
----+----------+-----------+--------+------
  1 | 01:00:00 |       100 |    100 | NULL
  2 | 02:00:00 |       200 |     50 | NULL
  3 | 03:00:00 |       150 |    100 | NULL
  4 | 04:00:00 |       180 |     80 | NULL
I want to find the highest value in Predicted and place it in the 'High' column (IN A SPECIFIC ROW).
========= I AM ABLE TO ACHIEVE THIS MANUALLY IN SQL WITH THE FOLLOWING:
SET @peak=(SELECT MAX(Predicted) FROM table);
UPDATE table SET Peak=@peak WHERE Id='1';
 Id |   Time   | Predicted | Actual | High
----+----------+-----------+--------+------
  1 | 01:00:00 |       100 |    100 |  200
  2 | 02:00:00 |       200 |     50 | NULL
  3 | 03:00:00 |       150 |    100 | NULL
  4 | 04:00:00 |       180 |     80 | NULL
=======================================
However, when I attempt to use the above syntax in a Perl script it fails due to the '@' (or any variable symbol). Here is the Perl syntax I attempted in order to overcome the variable issue, with no real favourable results. This is true even when passing the @peak variable to execute(@peak) with a ? placeholder in the prepared statement:
my $Id_var= '1';
my $sth = $dbh->prepare( 'set @peak = (SELECT MAX(Predicted) FROM table)' );
my $sti = $dbh->prepare ( "UPDATE table SET Peak = @peak WHERE Id = ? " );
$sth->execute;
$sth->finish();
$sti->execute('$Id_var');
$sti->finish();
$dbh->commit or die $DBI::errstr;
With the following error:
Global symbol "@peak" requires explicit package name
I would appreciate any help to get this working within my Perl script.
You need to escape the @ symbol (which denotes an array variable in Perl) or use single quotes, e.g.
my $sti = $dbh->prepare ( "UPDATE table SET Peak = \@peak WHE...
Or, use single quotes:
my $sti = $dbh->prepare ( 'UPDATE table SET Peak = @peak WHE...
Perl sees @peak as an array. Try referring to it as \@peak. The backslash means the next character is interpreted literally.

psql to csv file '-' becomes '-0'

I want to output the results from a psql query to a csv file. I have used the following approach
\o test.csv
SELECT myo_date, myo_maps_study, cbp_lvef, cbp_rvef, myx_ecg_posneg, myx_st, std_drugs, std_reason_comment FROM myo INNER JOIN studies ON (myo_std_uid = std_uid) LEFT OUTER JOIN cbp on (std_uid = cbp_std_uid) LEFT OUTER JOIN myx on (std_uid = myx_std_uid) WHERE myo_maps_study ~ 'MYO[0-9]*\$' AND std_reason_comment ~ 'AF' AND cbp_lvef is not null AND myx_st IS NOT NULL AND std_drugs IS NOT NULL ORDER by myo_date DESC LIMIT 500;
\q
The results on the query on its own is as follows
06/11/2013 | MYO134537 | 36.75000 | 29.00000 | - | 0.0 | ASPIRIN;BISOPROLOL;LISINOPRIL;METFORMIN;PPI;STATIN;FLUOXETINE;AMLODIPINE;GTN | CPOE;AF;T2DM;POSET
31/10/2013 | MYO130555 | 45.00000 | 36.25000 | - | 0.0 | DILTIAZEM;STATIN;LISINOPRIL;ASPIRIN;FRUSEMIDE;SALBUTAMOL;PARACETAMOL;AMOXICILLIN | TROP-VE; CP; AF; CTPA-VE; ANT T; INV; RF
23/10/2013 | MYO130538 | 18.75000 | 18.50000 | + | -1.0 | ASPIRIN;BISOPROLOL;RAMIPRIL | AF;MR;QLVFN;FAILED CARDIOVERSION
18/10/2013 | MYO134510 | 39.50000 | 32.25000 | - | 0.0 | ASPIRIN;STATIN;CO-CODAMOL;BISOPROLOL;GTN;PPI | PVD;AF
18/10/2013 | MYO130537 | 19.00000 | 18.00000 | - | 0.0 | STATIN;RAMIPRIL;AMLODIPINE;WARFARIN;(METOPROLOL-STOPPED FOR TEST) | TIA;AF;RF+++;ETINAP
However the csv file (opened in OpenOffice) looks like this:
06/11/2013 MYO134537 36.75 29 -0 0 ASPIRIN;BISOPROLOL;LISINOPRIL;METFORMIN;PPI;STATIN;FLUOXETINE;AMLODIPINE;GTN CPOE;AF;T2DM;POSET
31/10/2013 MYO130555 45 36.25 -0 0 DILTIAZEM;STATIN;LISINOPRIL;ASPIRIN;FRUSEMIDE;SALBUTAMOL;PARACETAMOL;AMOXICILLIN TROP-VE; CP; AF; CTPA-VE; ANT T; INV; RF
23/10/2013 MYO130538 18.75 18.5 0 -1 ASPIRIN;BISOPROLOL;RAMIPRIL AF;MR;QLVFN;FAILED CARDIOVERSION
18/10/2013 MYO134510 39.5 32.25 -0 0 ASPIRIN;STATIN;CO-CODAMOL;BISOPROLOL;GTN;PPI PVD;AF
18/10/2013 MYO130537 19 18 -0 0 STATIN;RAMIPRIL;AMLODIPINE;WARFARIN;(METOPROLOL-STOPPED FOR TEST) TIA;AF;RF+++;ETINAP
The '-' signs have become -0 and '+' have become 0. For clarity, I would like to change these to N and P respectively.
Doing a more test.csv gives
06/11/2013,MYO134537,36.75,29,-0,0,ASPIRIN;BISOPROLOL;LISINOPRIL;METFORMIN;PPI;STATIN;FLUOXETINE;AMLODIPINE;GTN,CPOE;AF;T2DM;POSET,,
31/10/2013,MYO130555,45,36.25,-0,0,DILTIAZEM;STATIN;LISINOPRIL;ASPIRIN;FRUSEMIDE;SALBUTAMOL;PARACETAMOL;AMOXICILLIN,TROP-VE; CP; AF; CTPA-VE; ANT T; INV; RF,,
23/10/2013,MYO130538,18.75,18.5,0,-1,ASPIRIN;BISOPROLOL;RAMIPRIL,AF;MR;QLVFN;FAILED CARDIOVERSION,,
18/10/2013,MYO134510,39.5,32.25,-0,0,ASPIRIN;STATIN;CO-CODAMOL;BISOPROLOL;GTN;PPI,PVD;AF,,
18/10/2013,MYO130537,19,18,-0,0,STATIN;RAMIPRIL;AMLODIPINE;WARFARIN;(METOPROLOL-STOPPED FOR TEST),TIA;AF;RF+++;ETINAP,,
However, when I select the cell in OpenOffice, the contents of the -0 or 0 cells is always 0. This does not allow me to do a search and replace. I do not want to change these manually.
Can I force the + and - through using a psql command, or can I use some other Linux tool to change the -0 to N and the 0 to P? I am using RHEL6.
Try using the decode function in place of the field name.
decode(myx_ecg_posneg,'-','N','+','P')
Update: Sorry, that's pl/sql. Try the case expression:
CASE myx_ecg_posneg
WHEN '-' THEN 'N'
WHEN '+' THEN 'P'
END
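A minimal sketch of how that CASE expression might slot into the export, using psql's unaligned mode to produce comma-separated output directly (the flags are standard psql options; connection details are left to whatever you already use):
# -A = unaligned output, -F ',' = comma field separator, -t = tuples only
psql -A -F ',' -t -c "
SELECT myo_date, myo_maps_study, cbp_lvef, cbp_rvef,
       CASE myx_ecg_posneg WHEN '-' THEN 'N' WHEN '+' THEN 'P' END,
       myx_st, std_drugs, std_reason_comment
FROM myo
INNER JOIN studies ON (myo_std_uid = std_uid)
LEFT OUTER JOIN cbp ON (std_uid = cbp_std_uid)
LEFT OUTER JOIN myx ON (std_uid = myx_std_uid)
WHERE myo_maps_study ~ 'MYO[0-9]*\$'
  AND std_reason_comment ~ 'AF'
  AND cbp_lvef IS NOT NULL AND myx_st IS NOT NULL AND std_drugs IS NOT NULL
ORDER BY myo_date DESC LIMIT 500;
" > test.csv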