How to split a string in an Excel file with a bash script? - mysql

Good afternoon,
I am trying to develop a bash script which fetches data from a database and then fills a CSV file with said data.
So far I have managed to do just that, but the way the data is presented is not good: all of the data is written into one single cell (screenshot omitted), and I would like the data to be split into separate columns (screenshot omitted).
Here is my bash script code so far:
#!/bin/bash
currentDate=$(date)
mysql -u root -p -D cms -e 'SELECT * from bill' > test_"${currentDate}".csv
Can any of you tell me which bash commands I can use to achieve the desired result?
Running cat on the file gives the following result (screenshot omitted):
Thank you in advance.

Using sed, you can change the delimiter in the output displayed in your image (please post text rather than images in the future):
$ sed 's/ \+/,/g' test.csv
If you're happy with the output, you can then edit the file in place:
$ sed -i 's/ \+/,/g' test.csv
You should now see the data in separate cells when the file is opened in Excel.

The data appears to be tab-delimited (cat -T test.csv should show a ^I between each pair of columns); I believe Excel's default behavior when opening a .csv file is to parse it on a comma delimiter.
To override this default behavior and have Excel parse the file based on a different delimiter (tab in this case):
open a clean/new worksheet
(menu) DATA -> From Text (file browser should pop up)
select test.csv and hit Import (new pop up asks for details on how to parse)
make sure Delimited radio button is chosen (the default), hit Next >
make sure Tab checkbox is selected (the default), hit Next >
verify the format in the Data preview window (at the bottom of the pop-up) and, if it looks OK, hit Finish
Alternatively, save the file as test.txt; upon opening that file, Excel should prompt you with the same pop-ups asking for parsing details.
I'm not a big Excel user, so I'm not sure if there's a way to get Excel to automatically parse your files based on tabs (a web search will likely provide more help at this point). If you'd rather fix the delimiter from the shell instead, see the sketch below.
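If the file itself is converted to real comma-separated values, Excel's comma default just works. A minimal sketch based on the script in the question, assuming mysql's default tab-separated batch output and that no field value contains a tab or a comma:
#!/bin/bash
# turn mysql's tab-separated output into a comma-separated file
mysql -u root -p -D cms -e 'SELECT * from bill' | tr '\t' ',' > test.csv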

Related

Extract data from HTML table and put it in a text file with shell

I need a shell script to get a public password for a VPN from a site (which refreshes the password every day, more or less). The password is in an HTML table, on a specific line of the HTML code of the web page. Once I've retrieved the password (a word made of 5 characters), I'd like to put it at the end of a simple text file. I need a script like this to automatically update the password in my OpenWrt-based router's OpenVPN client.
This is the webpage I'm talking about, and this is line number 265, where the password is (there are two instances of the password; it doesn't matter which one the script chooses):
<td>1<td>in1.vpnjantit.com<td>53,992,1194,25000<td><a href='http://www.vpnjantit.com/assets/in1.vpnjantit.com.zip'>in1.vpnjantit.com.zip</a><td>vpnjantit.com<td>x3bu7<td>2018-03-31 at 22:00<tr><tr><td>2<td>in2.vpnjantit.com<td>53,443,1194,25000<td><a href='http://www.vpnjantit.com/assets/in2.vpnjantit.com.zip'>in2.vpnjantit.com.zip</a><td>vpnjantit.com<td>x3bu7<td>2018-03-31 at 22:00<tr></table></div>
The file where I want to put the password will be very simple:
vpnjantit.com
passwd
The first line is the username, and it will always be the same: "vpnjantit.com". The second line is the 5-character password. I'd need the script to first delete the second line of the file, and then put the password from the HTML on the second line (replacing the old password with the new one).
I looked around and tried to do something with a sequence of awk, curl, cat and other commands, but I wasn't able to get the desired result. I really have no idea how to accomplish this.
Thank you a lot in advance for any advice!
I've used nokogiri, though there are other tools.
echo vpnjantit.com > file.txt # first line
curl http://www.vpnjantit.com/free-openvpn-india.html | nokogiri -e 'puts $_.at_css("table > tr > td:nth-child(6)").text' >> file.txt # second line
This would replace the file outright (delete it and create a new one).
Please note that this could break anytime with even minor format changes.
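Since nokogiri is unlikely to be available on an OpenWrt router, here is a rough curl+sed-only sketch. It merely assumes that the password cell still sits between a <td>vpnjantit.com<td> cell and the date cell, so any change to the markup will break it:
#!/bin/sh
# pull the password that follows the literal "<td>vpnjantit.com<td>" cell;
# the greedy .* picks the last instance, but either one is fine here
pass=$(curl -s http://www.vpnjantit.com/free-openvpn-india.html \
    | sed -n 's/.*<td>vpnjantit\.com<td>\([^<]*\)<td>.*/\1/p')
# rewrite the two-line file: fixed username, then the fresh password
printf 'vpnjantit.com\n%s\n' "$pass" > file.txt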

xls2csv not converting a single worksheet to csv

I am trying to convert an xls file to a csv file. In my workbook I have a sheet named networktab, and I need just that one sheet out of the entire workbook to be dumped to csv. So I used the following command:
xls2csv -b networktab london02.xls > london02.csv
But this still dumps all the worksheets into the csv. I am not sure what is missing here. I am using the xls2csv utility described at https://www.maketecheasier.com/convert-xls-file-to-csv-in-command-line/.
It looks like you're passing the wrong option to the command:
xls2csv -n networktab london02.xls > london02.csv
Options available for xls2csv are:
-b : the character set the source spreadsheet is in (before)
-n : specify the worksheet number/name to convert (you cannot use this option with -w)
For more info, check the utility's documentation.

Making the output file from db2 display the table-creation statements from a shell script?

I created a shell script that builds a string containing the table-creation statement for db2, for example:
string='db2 "CREATE TABLE foo (.........)"'
My script then connects to the database and runs the string, which tells db2 to create the table. Before the shell runs the string, I enabled the following on db2
db2 update command options using z on test-database.txt
so that all the output is saved to a text file.
However, my problem is that I want the string itself to show up in the output file created by db2, just as it does when you type the CREATE TABLE statement interactively in db2, but it never shows up there. The file only shows whether the table was created successfully or not, e.g. test-database.txt contains:
The SQL command completed successfully.
Is there a way to make the output file show the creation of the table? Thanks in advance.
You are talking about the options of the db2 CLP (command line processor), which has many different options.
If I understood correctly, you are writing a script (a bash script, I think) and you want to retrieve the command output. For this, you have two options:
Write the command output into a file, and then read the file.
Redirect the command output to a variable.
The first option is the easier one. It uses the -z option, which writes the whole output to a file; you then read the file back and extract whatever you need:
db2 -tf myfile.sql -z /tmp/output
VAR=$(cat /tmp/output)
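Since the question is about seeing the CREATE TABLE text itself in the file, and not just the completion message, it may be enough to add the CLP's -v (verbose) flag, which echoes each statement before its result; a sketch combining it with -z (untested against your setup):
# -t: statements end with ';', -v: echo each statement,
# -f: read statements from the file, -z: copy all output to /tmp/output
db2 -tvf myfile.sql -z /tmp/output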
The second option is a little tricky, because the command substitution creates another shell, and there you have to reload the db2 profile. This option uses the -v option, which echoes each command to standard output, and I hope that output is what you want to have.
VAR=$(. ~db2inst1/sqllib/db2profile ; db2 -tvf myfile.sql)
Finally, you just need to process the content of VAR via awk, sed, grep, etc.
For more information: http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0010410.html

Export a MYSQL column to a plain txt file with no headings

So what I'm trying to do is write a script or cron job (Linux, CentOS) to export the usernames listed in my WordPress database to a simple .txt file with just one username per line. Going by the screenshot (omitted), I would like the .txt file to read like this:
Sir_Fluffulus
NunjaX007
(Except with all the usernames in the user_login column.)
I have found how to export the entire table to a CSV file, but that contains 10+ fields (columns) that I do NOT want to show up in this text file.
Can anyone point me in the right direction on how to do this?
If it helps, this is going to be used for exporting users who have signed up on our website (WordPress) to a whitelist.txt file for Minecraft. Thanks!
Pass a query into the mysql tool, and use silent mode.
$ mysql -u username dbname -s <<< 'SELECT fieldname FROM tablename'
Sir_Fluffulus
NunjaX007
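To turn that into the cron job mentioned in the question, here is a hedged sketch of a crontab entry. The wp_users table and user_login column are the stock WordPress names; the user, database, and output path are placeholders to adjust, and the MySQL password is best kept in ~/.my.cnf rather than on the command line:
# refresh the Minecraft whitelist every 15 minutes (hypothetical schedule)
*/15 * * * * mysql -u username dbname -s -e 'SELECT user_login FROM wp_users' > /path/to/whitelist.txt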

CSV field delimiter problem

This CSV file has a field delimiter of $
It looks like this:
14$"ALL0053"$$$"A"$$$"Direct Deposit in FOGSI A/c"$$"DR"$"DAS PRADIP ...
How can I view the file as columns, with each field shown in its own column as in a table?
I've tried many ways; none work. Does anyone know how?
I am using Ubuntu.
That's a weird CSV, since a comma-separated file is usually separated by, well, commas. I think all you need to do is a simple find/replace, available in any text editor.
Open the file in gedit and look under Edit > Replace...
From there you can specify to replace all $s with ,s.
Once your file is a real CSV, you can open it in OpenOffice Calc (spreadsheet), or really any other spreadsheet program for Ubuntu (GNOME).
cut -d'$' -f 1,2,...x filename | sed 's/\$/ /g'
if you only want particular columns, and you don't want to see the $
or
sed 's/\$/ /g' filename
if you just want the $ to be replaced by a space
In Ubuntu, right-click on the file and hit Open With..., then OpenOffice Calc. You should then see a dialog box asking for delimiters etc. Uncheck comma and, in the "Other" field, type a $. Then hit OK and it will import it for you.
As a first attempt:
column -ts'$' path
but this doesn't handle empty fields well, so fix that with this ugly hack (the substitution is run twice so that runs of adjacent delimiters get fully padded):
sed 's/\$\$/$ $/g; s/\$\$/$ $/g' path | column -ts'$'
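If the end goal is a file that spreadsheet tools read without any prompting, awk can re-delimit it in one pass while preserving empty fields. A minimal sketch, assuming the file is named data.csv (note it does not handle delimiters embedded inside quoted fields):
# split on $, then rebuild each record with commas ($1=$1 forces the rebuild)
awk -F'$' -v OFS=',' '{ $1 = $1; print }' data.csv > data-comma.csv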