MySQL table formatting with Ruby mysql gem

The mysql command-line client prints results in an ASCII table format by default:
+----+----------+-------------+
| id | name     | is_override |
+----+----------+-------------+
|  1 | Combined |           0 |
|  2 | Standard |           0 |
+----+----------+-------------+
When calling mysql from the Unix shell with redirected input, this table formatting is not preserved, but it's easy to request via the -t option:
mysql -t my_schema < my_query_file.sql
In Ruby, I'm using the mysql gem to fetch results. Since the gem returns rows as hashes, there's no option to preserve the table formatting. Is there any way I can easily print a hash with that formatting, without having to calculate spacing and such?
db = Mysql.new(my_database, my_username, my_password, my_schema)
result = db.query("select * from my_table")
result.each_hash do |h|
  # Print row. Any way to print it with formatting here?
  puts h
end

Some gems and code that can do this:
https://rubygems.org/gems/datagrid
http://rubygems.org/gems/text-table
https://github.com/visionmedia/terminal-table
https://github.com/geemus/formatador
https://github.com/wbailey/command_line_reporter
https://github.com/arches/table_print
http://johnallen.us/?p=347
I have not tried any of them.
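If you'd rather avoid a dependency, computing the widths yourself is only a few lines. A minimal sketch (it left-justifies every cell, unlike mysql, which right-justifies numbers; assumes every row hash has the same keys, as each_hash yields them):

```ruby
# Minimal MySQL-style table printer for an array of row hashes.
def print_table(rows)
  headers = rows.first.keys
  # Column width = widest value (or header) in each column
  widths = headers.map do |h|
    [h.length, *rows.map { |r| r[h].to_s.length }].max
  end
  border = "+" + widths.map { |w| "-" * (w + 2) }.join("+") + "+"
  format_row = lambda do |values|
    "|" + values.each_with_index.map { |v, i| " #{v.to_s.ljust(widths[i])} " }.join("|") + "|"
  end
  puts border
  puts format_row.call(headers)
  puts border
  rows.each { |r| puts format_row.call(r.values_at(*headers)) }
  puts border
end

rows = [
  { "id" => 1, "name" => "Combined", "is_override" => 0 },
  { "id" => 2, "name" => "Standard", "is_override" => 0 },
]
print_table(rows)
```

Collect the hashes from each_hash into an array first, since the widths depend on every row.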

Related

MySQL query splitting unexpectedly in Bash Script

I am having an issue and don't understand its underlying cause.
I have a table that has these three fields:
______________________________________________
| cid | order_id | TxRefNum |
----------------------------------------------
I am making a simple call in my bash script (there is literally no other code to start with)
#!/bin/bash
mysql --login-path=main-data -e "SELECT
cid,
order_id,
TxRefNum
FROM database.orders_temp" |
while read this that other; do
echo "$this || $that || $other"
done
I would expect to see the following:
__________________________________________________________
| 29 | F0VIc - CHATEAU ROOFIN | 5555555 |
----------------------------------------------------------
Instead, my script is splitting the string $that into two separate strings. The echo actually produces:
___________________________________________________
| 29 | F0VIc | - CHATEAU ROOFIN |
---------------------------------------------------
Do I have to set a delimiter when setting my variables in my while loop?? I am truly stumped!!
Getting output from the mysql command formatted in an intelligent way is problematic. In your case, read is splitting on whitespace, so the spaces inside order_id act as delimiters. You need to split a different way. I was able to get this working; you'll note the literal | columns in the query as well as the IFS line at the top:
#!/bin/bash
IFS='|' # set the delimiter
mysql --login-path=main-data -e "SELECT
29 as cid, '|',
'F0VIc - CHATEAU ROOFIN' as order_id,
'|',
5555555 as TxRefNum
FROM dual" |
while read this that other; do
echo "$this || $that || $other"
done
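An alternative that avoids adding literal | columns to the query: mysql's batch output is tab-separated, so telling read to split only on tabs keeps the spaces inside order_id intact. A sketch, simulating the mysql output with printf:

```shell
#!/bin/sh
# Simulates the tab-separated output of:
#   mysql --login-path=main-data -e "SELECT cid, order_id, TxRefNum FROM database.orders_temp"
printf '29\tF0VIc - CHATEAU ROOFIN\t5555555\n' |
while IFS="$(printf '\t')" read -r this that other; do
    echo "$this || $that || $other"
done
```

Setting IFS only on the read command also avoids changing splitting behavior for the rest of the script.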

How to investigate MySQL errors

Below is a query I found to display my errors and warnings in MySQL:
SELECT
`DIGEST_TEXT` AS `query`,
`SCHEMA_NAME` AS `db`,
`COUNT_STAR` AS `exec_count`,
`SUM_ERRORS` AS `errors`,
(ifnull((`SUM_ERRORS` / nullif(`COUNT_STAR`,0)),0) * 100) AS `error_pct`,
`SUM_WARNINGS` AS `warnings`,
(ifnull((`SUM_WARNINGS` / nullif(`COUNT_STAR`,0)),0) * 100) AS `warning_pct`,
`FIRST_SEEN` AS `first_seen`,
`LAST_SEEN` AS `last_seen`,
`DIGEST` AS `digest`
FROM
performance_schema.events_statements_summary_by_digest
WHERE
((`SUM_ERRORS` > 0) OR (`SUM_WARNINGS` > 0))
ORDER BY
`SUM_ERRORS` DESC,
`SUM_WARNINGS` DESC;
Is there some way to drill down into performance_schema to find the exact error message that is associated with the errors or warnings above?
I was also curious what it means when the db column or query column shows up as NULL. Below are a few specific examples of what I'm talking about:
+-----------------------+------+------------+--------+----------+--------+
| query                 | db   | exec_count | errors | warnings | digest |
+-----------------------+------+------------+--------+----------+--------+
| SHOW MASTER LOGS      | NULL |        192 |    192 |        0 | ...    |
+-----------------------+------+------------+--------+----------+--------+
| NULL                  | NULL |     553477 |     64 |    18783 | NULL   |
+-----------------------+------+------------+--------+----------+--------+
| SELECT COUNT ( * )    | NULL |         48 |     47 |        0 | ...    |
| FROM `mysql` . `user` |      |            |        |          |        |
+-----------------------+------+------------+--------+----------+--------+
I am also open to using a different query someone may have that displays these errors/warnings.
The message will be in the performance_schema.events_statements_history.message_text column. You do need to make sure that the performance_schema_events_statements_history_size config variable is set to a positive and sufficiently large value, and that history collection is enabled. To enable it, run:
update performance_schema.setup_consumers set enabled='YES'
where name='events_statements_history';
To check if it is enabled:
select * from performance_schema.setup_consumers where
name='events_statements_history';
A NULL value of db means there was no active database selected. Note that the active database does not have to be the same as the database of the table involved in a query; it is used as the default when one is not explicitly specified in a query.
This will only give you error messages, not warning messages. From a brief look at the code it appears that the warning text does not get logged anywhere - which is understandable given that one statement could produce millions of them. So you have a couple of options:
Extract the statement from events_statements_history.sql_text, re-execute it, and then run SHOW WARNINGS
Extract the statement, track it down in your application code, then instrument your code to log the output of SHOW WARNINGS in hopes of catching it live if manual execution fails to reproduce the warnings
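Putting that together, once history collection is on, a drill-down might look like this (a sketch; the digest value is a placeholder you'd take from the summary query, and column availability can vary by server version):

```sql
-- Pull the recorded error details for statements matching a digest
-- taken from the summary query above.
SELECT h.SQL_TEXT,
       h.MYSQL_ERRNO,
       h.RETURNED_SQLSTATE,
       h.MESSAGE_TEXT
FROM performance_schema.events_statements_history AS h
WHERE h.ERRORS > 0
  AND h.DIGEST = '<digest from the summary>';
```

Remember the history table is a ring buffer per thread, so only recent statements will still be there.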

Read MySQL result from shell script

First off, I am very new to shell scripting, so please don't shoot me!! :)
What I'm trying to do: I have a multi-site WordPress installation, and I'd like to write a script that can export specific tables from the schema, either by passing the site id as an argument to the shell script or by setting an option to export all of the selected tables in the schema.
WordPress, in order to recognize which table set belongs to which site, changes the prefix of each table set. For example:
wp_options
wp_1_options
...
wp_x_options
In addition, WordPress stores the blog id in a special table called wp_blogs.
So, from my shell script I run the following:
mysql -uUSER -pPASS -e 'SELECT `blog_id` AS `ID`, `path` AS `Slug` FROM `wp`.`wp_blogs`'
and I am getting the following results
+----+---------------------------+
| ID | Slug                      |
+----+---------------------------+
|  1 | /                         |
|  2 | /site-2-slug/             |
|  4 | /site-4-slug/             |
|  5 | /site-5-slug/             |
|  6 | /site-6-slug/             |
|  7 | /site-7-slug/             |
|  8 | /site-8-slug/             |
|  9 | /site-9-slug/             |
| 10 | /site-10-slug/            |
+----+---------------------------+
So, now the actual question is: how can I parse the MySQL result line by line, in order to get the ID and Slug information?
Side note 1: The whole script is currently generated and run somewhat manually. I need this information to automate the export script even further.
Side note 2: The MySQL command is executed via Vagrant ssh, like the following line:
sudo vagrant ssh --command "mysql -uroot -proot -e 'SELECT blog_id FROM wp.wp_blogs'"
You could save the result to a file using INTO OUTFILE, like below:
SELECT blog_id, path FROM wp.wp_blogs
INTO OUTFILE '/tmp/blogs.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
And then you could process it line by line using sed/awk or a simple while loop. For example, to print the id and path from each line:
awk -F',' '{print "Id: " $1 ", path: "$2}' /tmp/blogs.csv ##or simply cat the file.
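Alternatively, you can skip the intermediate file entirely: mysql's -B (batch) and -N (skip column names) flags produce plain tab-separated rows you can read directly in a loop. A sketch, with the mysql output simulated by printf:

```shell
#!/bin/sh
# Simulates: mysql -uUSER -pPASS -B -N -e 'SELECT blog_id, path FROM wp.wp_blogs'
# Batch mode emits one tab-separated row per line, no table borders.
printf '1\t/\n2\t/site-2-slug/\n' |
while IFS="$(printf '\t')" read -r id slug; do
    echo "Id: $id, Slug: $slug"
done
```

Inside the loop you'd substitute the id into your mysqldump table-prefix logic instead of echoing.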

PDI Spoon MS Access concat

Suppose I have this table named table1:
| f1 | f2   |
-------------
|  1 | str1 |
|  1 | str2 |
|  2 | str3 |
|  3 | str4 |
|  3 | str5 |
I wanted to do something like:
Select f1, group_concat(f2) from table1 group by f1
(this is MySQL; I am working with MS Access!) And get the result:
| 1 | str1,str2 |
| 2 | str3      |
| 3 | str4,str5 |
So I searched for a function in ms-access that would do the same and found it! xD
The problem is that every day I have to download some database in MS Access, create the concat function there, and then create a new table with those concatenated values.
I wanted to incorporate that process into the Pentaho Data Integration (Spoon) transformations that I use after all this work.
So what I want is a way to define an MS Access function in PDI Spoon, or some way to combine steps that would emulate MySQL's group_concat.
Simple: query from Access, and use the "Group by" step to do your group_concat; there is an option to concatenate fields separated by a comma or any string of your choice.
Don't forget that the stream must be sorted by whatever you're grouping on, unless you use the "Memory group by" step.
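The sort requirement exists because the plain "Group by" step aggregates in streaming fashion: it only merges adjacent rows with the same key, much like this awk sketch of group_concat (hypothetical input format: one f1,f2 pair per line, pre-sorted by f1):

```shell
#!/bin/sh
# Streaming group_concat: adjacent rows sharing a key are merged;
# unsorted input would emit the same key more than once.
printf '1,str1\n1,str2\n2,str3\n3,str4\n3,str5\n' |
awk -F',' '
    $1 != key { if (NR > 1) print key "|" acc; key = $1; acc = $2; next }
    { acc = acc "," $2 }
    END { print key "|" acc }
'
```

The "Memory group by" step avoids the sort by holding every group in memory instead.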
A simple alternative is to move your MS Access data into MySQL with the same structure (MySQL DB structure = MS Access DB structure), then execute your Select f1, group_concat(f2) from table1. In detail:
Create transformation A to move/transfer your MS Access data to MySQL
Create transformation B to execute Select f1, group_concat(f2) from table1
Create a job to execute transformations A and B (you must execute transformation A before B)

How to batch load CSV columns into a MySQL table

I have numerous csv files that will form the basis of a mysql database. My problem is as follows:
The input CSV files are of the format:
TIME | VALUE PARAM 1 | VALUE PARAM 2 | VALUE PARAM 3 | ETC.
0.00001 | 10 | 20 | 30 | etc.
This is not the structure I want to use in the database. There I would like one big table for all of the data, structured something like:
TIME | PARAMETER | VALUE | Unit of Measure | Version
This means that I would like to insert the combination of TIME and VALUE PARAM 1 from the CSV into the table, then the combination of TIME and VALUE PARAM 2, and so on, and so on.
I haven't done anything like this before, but could a possible solution be to set up a BASH script that loops through the columns and on each iteration inserts the combination of time + value into my database?
I have a reasonable understanding of MySQL but very limited knowledge of bash scripting, and I couldn't find a way to do this with the LOAD DATA INFILE command alone.
If you need more info to help me out, I'm happy to provide more info!
Regards,
Erik
I do this all day, every day, and as a rule have the most success with the fewest headaches by using LOAD DATA INFILE into a temporary table, then leveraging the power of MySQL to get it into the final table/format. Details at this answer.
To illustrate this further, we process log files for every video event of 80K high schools/colleges around the country (that's every pause/play/seek/stop/start for hundreds of thousands of videos).
They're served from a number of different servers, depending on the type of video (WMV, FLV, MP4, etc.), so there's some 200GB to handle every night, with each format having a different log layout. The old way we did it with CSV/PHP took literally days to finish, but changing it to LOAD DATA INFILE into temporary tables, unifying them into a second, standardized temporary table, and then using SQL to group and otherwise slice and dice cut the execution time to a few hours.
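A sketch of that pattern applied to the CSV layout in the question (all table and column names here are hypothetical, and the separator assumes the " | "-delimited input shown above):

```sql
-- 1. Load the raw CSV into a staging table matching its layout.
CREATE TEMPORARY TABLE staging (
    time_s DECIMAL(12,6),
    param1 INT,
    param2 INT,
    param3 INT
);

LOAD DATA LOCAL INFILE 'input.csv'
INTO TABLE staging
FIELDS TERMINATED BY ' | '
IGNORE 1 LINES;

-- 2. Unpivot into the final long-format table with plain SQL.
INSERT INTO measurements (time_s, parameter, value, unit, version)
SELECT time_s, 'PARAM 1', param1, 'unit', 'v1' FROM staging
UNION ALL
SELECT time_s, 'PARAM 2', param2, 'unit', 'v1' FROM staging
UNION ALL
SELECT time_s, 'PARAM 3', param3, 'unit', 'v1' FROM staging;
```

One UNION ALL branch per value column replaces the per-column loop you'd otherwise write in bash.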
It would probably be easiest to preprocess your CSV with an awk script first, and then (as Greg P said) use LOAD DATA LOCAL INFILE. If I understand your requirements correctly, this awk script should work:
#!/usr/bin/awk -f
# -F cannot be passed on the shebang line, so set the field separator
# here; the regex also strips the whitespace around the | delimiters.
BEGIN { FS = " *\\| *" }
NR == 1 {
    for (col = 2; col <= NF; col++) label[col] = $col
    printf("TIME | PARAM | VALUE | UNIT | VERSION\n")
    next
}
{
    for (col = 2; col <= NF; col++)
        printf("%s | %s | %s | [unit] | [version]\n", $1, label[col], $col)
}
Output:
$ ./test.awk test.in
TIME | PARAM | VALUE | UNIT | VERSION
0.00001 | VALUE PARAM 1 | 10 | [unit] | [version]
0.00001 | VALUE PARAM 2 | 20 | [unit] | [version]
0.00001 | VALUE PARAM 3 | 30 | [unit] | [version]
0.00001 | ETC. | etc. | [unit] | [version]
Then
LOAD DATA LOCAL INFILE 'processed.csv'
INTO TABLE `table`
FIELDS TERMINATED BY ' | '
IGNORE 1 LINES;
(Note: I haven't tested the MySQL part.)