Good Afternoon
I am trying to develop a bash script which fetches data from a database and then fills a CSV file with that data.
So far I have managed to do just that, but the way the data is presented is not good: all of it is written into one single cell, like so:
and I would like the data to be presented like this:
Here is my bash script code so far:
#!/bin/bash
currentDate=$(date +%F)   # e.g. 2024-05-01; plain `date` output contains spaces, which make an awkward filename
mysql -u root -p -D cms -e 'SELECT * FROM bill' > "test_${currentDate}.csv"
Can any of you tell me what bash commands I can use to achieve the desired result?
Running the cat command on the file gives the following result:
Thank you in advance.
Using sed, you can change the delimiter in the output shown in your image (please post text rather than an image in the future):
$ sed 's/ \+/,/g' test.csv
If you're happy with the output, you can then save the file in place:
$ sed -i 's/ \+/,/g' test.csv
You should now see the data in separate cells when the file is opened in Excel.
The data appears to be tab-delimited (cat -T test.csv should show a ^I between the columns); I believe Excel's default behavior when opening a .csv file is to parse it on a comma delimiter.
To override this default behavior and have Excel parse the file on a different delimiter (tab in this case):
open a clean/new worksheet
(menu) DATA -> From Text (file browser should pop up)
select test.csv and hit Import (new pop up asks for details on how to parse)
make sure Delimited radio button is chosen (the default), hit Next >
make sure Tab checkbox is selected (the default), hit Next >
verify the format in the Data preview window (at the bottom of the pop-up) and, if it looks OK, hit 'Finish'
Alternatively, save the file as test.txt; upon opening that file with Excel you should be prompted with the same pop-ups asking for parsing details.
I'm not a big Excel user, so I'm not sure whether there's a way to get Excel to automatically parse your files on tabs (a web search will likely provide more help at this point).
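If you'd rather skip Excel's import dialog altogether, you can convert the tabs to commas when the file is generated. A minimal sketch, assuming the tab-delimited output described above (database and table names taken from the question); note this produces a broken CSV if any field itself contains a tab, comma, or newline:
#!/bin/bash
currentDate=$(date +%F)
# -B forces tab-separated batch output; sed then turns each tab into a comma
mysql -u root -p -D cms -B -e 'SELECT * FROM bill' | sed 's/\t/,/g' > "test_${currentDate}.csv"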
Posted this to Reddit yesterday, but no love. I'm on CentOS, writing bash scripts and parsing data to import into MySQL.
I'm having to convert a story archive that stored the main part of each story in a plain text file, and I need to import these multi-line text files into a column in my database. I know I can use mysqlimport, and I have the files set up as pipe-delimited - BUT because the text files have carriage returns/line breaks in them, each paragraph is imported as its own row. So a 9-paragraph text file imports as 9 rows when I use mysqlimport.
Is there a way to do this?
I know the ideal text file for importing (with pipe delimiters) would look like this (without the blank lines in between):
this is my record|12345
another record|24353
have another bagel, why don't you?|43253
However, my file actually looks closer to this:
This is the first line of my first paragraph. And now I'm going to do some more line wrapping and stuff.
This is a second line from the same text file that should be treated as a single record along with the first line in a single "blob" or text field. |12345
This is the last stumbling block in recovering from a bad piece of software someone dropped in my lap, and I hope it can be done. I have 14,000 of these text files (each in this format), so doing them by hand is out of the question.
Encode/transmit each newline as '\n' and, in the same way, each tab as '\t'. This is good practice whenever you store raw text or URLs in your database, and it will solve your current problem too.
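A minimal sketch of that encoding step in awk, assuming each record ends at the first line containing the pipe-delimited ID (stories.txt and stories.out are placeholder names):
awk '
    /^$/ && buf == "" { next }                  # skip blank lines between records
    { buf = (buf == "" ? $0 : buf "\\n" $0) }   # join lines, encoding each break as \n
    /\|/ { print buf; buf = "" }                # a pipe marks the end of a record
' stories.txt > stories.out
Each record then occupies one physical line, so mysqlimport sees one row per story.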
Please let me know if this helps. Thanks.
I do not know about the performance of converting the lines to SQL statements, but I think this can be useful:
Input
This is the first line of my first paragraph. And now I'm going to do some more line wrapping and stuff.
This is a second line from the same text file that should be treated as a single record along with the first line in a single "blob" or text field. |12345
I am hoping I understood the question correct.
Everything without a pipe is part of the first field.
And the line with a pipe is for field 1 and 2.
Like this one |12346
Script
my_insert="INSERT INTO my_table
(field1, field2)
VALUES
('"
firstline=0
while read -r line; do
    # A blank input line becomes a blank output line between statements.
    if [[ -z "${line}" ]]; then
        printf "\n"
        continue
    fi
    # At the first line of a record, emit the INSERT prefix.
    if [[ "${firstline}" -eq 0 ]]; then
        printf "%s" "${my_insert}"
        firstline=1
    fi
    line_no_pipe=${line%|*}
    if [[ "${line}" = "${line_no_pipe}" ]]; then
        # No pipe on this line: it belongs to field 1, print it as-is.
        printf "%s\n" "${line}"
    else
        # Pipe found: close field 1, print field 2, and finish the statement.
        printf "%s',%s);\n" "${line_no_pipe}" "${line##*|}"
        firstline=0
    fi
done < input
Output
INSERT INTO my_table
(field1, field2)
VALUES
('This is the first line of my first paragraph. And now I'm going to do some more line wrapping and stuff.
This is a second line from the same text file that should be treated as a single record along with the first line in a single "blob" or text field. ',12345);
INSERT INTO my_table
(field1, field2)
VALUES
('I am hoping I understood the question correct.
Everything without a pipe is part of the first field.
And the line with a pipe is for field 1 and 2.
Like this one ',12346);
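If the generated statements look right, you can pipe them straight into the client (assuming the script is saved as convert.sh; my_database is a placeholder):
$ bash convert.sh | mysql -u root -p my_database
One caveat: single quotes inside the story text (like the apostrophes in the sample above) are not escaped, so a real run would need an escaping pass (e.g. doubling each ' in field 1) before the statements are valid SQL.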
I'm not sure if this is a MySQL problem or an SSH problem. However, the issue does not happen when pasting into another program such as nano, or when using a bash script.
I have a mysqldump file containing a bunch of lines that look like
INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:4...
with a lot of different entries (the lines are really long). I'm trying to paste it into my terminal to insert this data into a remote database, but when I paste, it seems to do it in pieces, inserting the chunks out of order. Here is an example post-mangling:
','May 2011',8,45);','April 2011',7,45),(21,'2011-05-09 09:31:28','2011-05-09 09:31:28','2011-05-12 08:48:16','','March 2011',6,45),(20,'2011-04-07 13:45:14','2011-04-07 13:45:14','2011-04-13 16:00:28','','February 2011',5,45),(19,'2011-03-03 13:36:26','2011-03-03 13:36:26','2011-03-10 08:34:19','','December 2010',4,45),(18,'2011-02-01 13:43:16','2011-02-01 13:43:16','2011-02-15 11:22:09','','November 2010',3,45),(17,'2010-12-07 12:04:53','2010-12-07 12:04:53','2010-12-09 10:00:02','','October 2010',2,45),(16,'2010-11-05 13:04:06','2010-11-05 13:04:06','2010-11-15 11:29:29','','September 2010',1,45),(14,'2010-10-05 08:58:27','2010-10-11 13:28:54','2010-10-12 07:21:20','INSERT INTO `issues` VALUES (10,'2010-06-21 16:16:08','2010-08-31 11:44:18','2010-10-11 12:33:46','\0','June 2010',0,45),(11,'2010-06-24 15:13:00','2010-06-24 15:13:22',NULL,'\0','May 2010',0,45),(12,'2010-08-25 12:47:42','2010-08-31 11:44:02','2010-10-11 12:33:59','
Does anyone know what is causing this issue, and a workaround? I've tried assuming it's a display issue and inserting it anyway, but that's a no-go; it seems it really is out of order. I'm using OS X 10.6.6 (Snow Leopard) and Apple's Terminal.
Thanks
I had a similar issue with pasting into both mysql and bash over SSH. The culprit was Unicode characters which got interpreted as control characters.
Symptoms: When pasting the text, the insert position would move back to position 1 several times, seemingly at random, without starting a new line; then some new text would be inserted, or some already-pasted text would be overwritten, resulting in a garbled mess.
Cause: It turned out that the text to paste had some Unicode dash-like characters where plain dashes should be, and bash and mysql interpreted these as control characters that move the cursor position.
So make sure your text does not contain unwanted characters before pasting.
Test: A good test for me has been to open vi (over ssh or not), go into insert mode (press "i") and paste (press Shift+Insert). It will display the Unicode characters broken up into their byte sequences.
Example: My text to paste started like this:
mysqldump –-opt –-no-create-db
Pasting this into bash or mysql via ssh resulted in:
--no-create-dbopt mysqldump
Pasting this into vi made the bad dash-like characters visible, resulting in:
mysqldump â~#~S-opt â~#~S-no-create-db
This means the first dash of every option was the wrong character. I corrected them, and now everything works fine:
mysqldump --opt --no-create-db
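A quick shell-side check before pasting is to list any lines containing non-ASCII bytes (GNU grep; dump.sql is a placeholder name):
$ grep -nP '[^\x00-\x7F]' dump.sql
If it prints nothing, the file is plain ASCII and should paste cleanly.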
I like to do:
ssh hostname mysqldump database | mysql localdatabase
And just do the whole thing in one command.
You can of course add various options to the dump command to skip table drops and creation, or other things you don't need.
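For instance (hostname and the database names are placeholders; both flags are standard mysqldump options):
$ ssh hostname mysqldump --skip-add-drop-table --no-create-info database | mysql localdatabase
Here --skip-add-drop-table leaves out the DROP TABLE statements and --no-create-info leaves out the CREATE TABLE statements, so only the row data is replayed locally.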
I have an input file I want to load into a MySQL database, but spread throughout the file are comment lines, which start with !. For example,
!dataset_value_type = count
The other lines that I want to read into the table are normal, without the leading !.
What import command can I use to ignore lines that start with !? I only see options to keep lines that start with something (LINES STARTING BY).
Ouch! I think you will need to pre-process your data file. Something like:
perl -ni.bak -e 'print unless /^!/;' data-file.dat
(using -n with a conditional print removes the comment lines entirely, rather than leaving blank lines behind)
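If you'd rather stay in the shell, a grep one-liner does the same filtering into a new file (clean.dat is a placeholder) that you can then hand to mysqlimport:
$ grep -v '^!' data-file.dat > clean.dat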
This CSV file has a field delimiter of $
It looks like this:
14$"ALL0053"$$$"A"$$$"Direct Deposit in FOGSI A/c"$$"DR"$"DAS PRADIP ...
How can I view the file as columns, with each field shown as a column in a table?
I've tried many ways, but none work. Does anyone know how?
I am using Ubuntu.
That's a weird CSV, since a comma-separated file is usually separated by, well, commas. I think all you need to do is a simple find/replace, available in any text editor.
Open the file in gedit (the GNOME text editor) and look under Edit > Replace...
From there you can replace all the $s with ,s.
Once your file is a real CSV, you can open it in OpenOffice Calc (spreadsheet), or really any other spreadsheet program for Ubuntu (GNOME).
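The same replacement can be done from the shell, writing a new file rather than editing in place (file names are placeholders; this assumes none of the field values contain a literal $):
$ tr '$' ',' < file.csv > fixed.csv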
cut -d'$' -f 1,2,...x filename | sed 's/\$/ /g'
if you only want particular columns, and you don't want to see the $
or
sed 's/\$/ /g' filename
if you just want the $ to be replaced by a space
In Ubuntu, right-click on the file, hit Open With..., then OpenOffice Calc. You should then see a dialog box asking for delimiters etc. Uncheck "comma" and, in the "Other" field, type a $. Then hit OK and it will import the file for you.
As a first attempt:
column -ts'$' path
but this doesn't handle empty fields well, so fix that with this ugly hack:
sed 's/\$\$/$ $/g;s/\$\$/$ $/g' path | column -ts'$'
(the substitution runs twice so that runs of three or four consecutive delimiters are split as well)