How to remove blank records returned from osql? - osql

I have a batch script which runs an SQL query on multiple databases and appends the results to a .dat file. The script is adding three blank lines under the result from each database, and also at the top of the file. I'm using the osql command below to run the SQL query:
osql -e -S %1 -n -b -h-1 -w1000 -s","
I need to remove those blank lines. Are there any osql options I can use for this?

Try changing the -w value, e.g. -w4000 or -w8000, to set a larger column width.
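As far as I know, osql itself has no option to suppress those blank lines, but you can filter them out before appending. A minimal sketch, assuming the query comes from an input file (query.sql and results.dat are placeholder names):
rem findstr /v /r /c:"^ *$" drops lines that are empty or contain only spaces
osql -e -S %1 -n -b -h-1 -w1000 -s"," -i query.sql | findstr /v /r /c:"^ *$" >> results.dat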

Related

How to split a string in an Excel file with a bash script?

Good afternoon,
I am trying to develop a bash script which fetches data from a database and then fills a CSV file with that data.
So far I have managed to do just that, but the way the data is presented is not good: all of the data in each row is written into one single cell, and I would like each field to end up in its own column.
Here is my bash script code so far:
#!/bin/bash
currentDate=$(date)
mysql -u root -p -D cms -e 'SELECT * FROM bill' > "test_${currentDate}.csv"
Can anyone tell me which bash commands I can use to achieve the desired result?
Thank you in advance
Using sed, you can change the delimiter in the output shown in your image (please post text rather than images in the future):
$ sed 's/ \+/,/g' test.csv
If you're happy with the output, you can then save the file in place:
$ sed -i 's/ \+/,/g' test.csv
You should now have the data in separate cells when the file is opened in Excel.
The data appears to be tab-delimited (cat -T test.csv should show a ^I between the columns); I believe Excel's default behavior when opening a .csv file is to parse it on a comma delimiter.
To override this default behavior and have Excel parse the file on a different delimiter (tab in this case):
open a clean/new worksheet
(menu) DATA -> From Text (a file browser should pop up)
select test.csv and hit Import (a new pop-up asks for details on how to parse the file)
make sure the Delimited radio button is chosen (the default), hit Next >
make sure the Tab checkbox is selected (the default), hit Next >
verify the format in the Data preview window (at the bottom of the pop-up) and, if it looks OK, hit Finish
Alternatively, save the file as test.txt; upon opening it with Excel you should be prompted with the same pop-ups asking for parsing details.
I'm not a big Excel user, so I'm not sure whether there's a way to get Excel to parse your files on tabs automatically (a web search will likely provide more help at this point).
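Alternatively, if the goal is a .csv that Excel parses directly, you can convert the tabs to commas up front. A minimal sketch, assuming the output really is tab-delimited and the data contains no embedded commas (GNU sed):
$ sed -i 's/\t/,/g' test.csv
After this, the file should open in Excel with one field per column, no import wizard needed.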

Export to CSV with dash line and Text to Columns issues - sqlcmd

I am writing a sqlcmd command in a batch file to export SQL results to a CSV file. However, I encounter two problems in the CSV file; are there any ways to solve them?
(I am new to batch files and sqlcmd.)
sqlcmd -S Servername -d DBname -U username -P pw -i C:\test\.sql -o "C:\Test\result.csv" -W -w 2000 -s ";"
1. There is a dashed line ----- between the header and the data. How do I remove the dashed line?
2. The result is currently consolidated into the first column of each row. Can I make it split into separate, ;-delimited columns in the result (without manually running Text to Columns in Excel)?
1- sqlcmd has nothing built in to remove just the dashed line. You can use the option -h -1 to suppress the headers altogether, then select the header names inside your query as a separate query on top of your main query.
2- Separate the columns with a comma "," not ";", since CSV stands for Comma-Separated Values :)
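Putting both suggestions together, a minimal sketch (same command as in the question, with -h -1 added and the separator switched to a comma):
rem -h -1 suppresses the header row and its dashed underline; -s"," gives comma-separated output
sqlcmd -S Servername -d DBname -U username -P pw -i C:\test\.sql -o "C:\Test\result.csv" -W -w 2000 -h -1 -s","
With the headers suppressed, you can emit the column names yourself as the first row returned by the query, as described in point 1.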

How to insert content of a file in different fields in mysql database using shell script?

I am trying to scan a folder for new files, read those files, insert their content into a database, and then delete each file from the folder. Up to this point it is working, but the issue is that the whole content is inserted into a single field in the database.
Below is the code:
inotifywait -m /home/a/b/c -e create -e moved_to |
while read path action file; do
    for filename in `ls -1 /home/a/b/c/*.txt`
    do
        while read line
        do
            echo $filename $line
            mysql -uroot -p -Bse "use datatable; INSERT INTO table_entries (file, data) VALUES ('$filename', '$line');"
        done < $filename
    done
    find /home/a/b/c -type f -name "*.txt" -delete
done
Basically the files contain: name,address,contact_no,email.
I want to insert the name from the file into the name field in the database, the address into address, and so on. In PHP we would use explode to split the data; what do I use in a shell script?
This would be far easier if you use LOAD DATA INFILE (see the manual for full explanation of syntax and options).
Something like this (though I have not tested it):
inotifywait -m /home/a/b/c -e create -e moved_to |
while read path action file; do
    for filename in `ls -1 /home/a/b/c/*.txt`
    do
        mysql datatable -e "LOAD DATA LOCAL INFILE '$filename'
            INTO TABLE table_entries (name, address, contact_no, email)
            SET file='$filename'"
    done
    find /home/a/b/c -type f -name "*.txt" -delete
done
Edit: I specified mysql datatable, which is like using USE datatable; to set the default database. This should resolve the error about "no database selected."
The columns you list as (name, address, contact_no, email) name the columns in the table, and they must match the columns in the input file.
If you have another column in your table that you want to set, but not from data in the input file, you use the extra clause SET file='$filename'.
You should also use some error checking to make sure the import was successful before you delete your *.txt files.
Note that LOAD DATA INFILE assumes lines end in newline (\n) and fields are separated by tab (\t). If your text file uses commas or some other separator, you can add syntax to the LOAD DATA INFILE statement to customize how it reads your file. The documentation shows how to do this, with many examples: https://dev.mysql.com/doc/refman/5.7/en/load-data.html. I recommend you spend some time and read it; it's really not very long.
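For the comma-separated files described in the question, a hedged sketch of the same statement with an explicit field terminator (untested, like the original; table and column names are taken from the question):
mysql datatable -e "LOAD DATA LOCAL INFILE '$filename'
    INTO TABLE table_entries
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n'
    (name, address, contact_no, email)
    SET file='$filename'"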

Making the output file from db2 display the process of table creation from a shell script?

I created a shell script that builds a string containing the table-creation command for db2. For example:
string="db2 \"CREATE TABLE foo (.........\""
Now my script connects to the database and passes the string to db2, which creates the table. Before the shell sends the string, I enabled the following command option in db2
db2 update command options using z on test-database.txt
so that all of the output is saved to a text file.
However, my problem is that I want that string to show up in the output file created by db2, just as it does when you type the CREATE TABLE statement interactively in db2, but it never shows up in the output file. Instead, test-database.txt only shows whether the table was created successfully or not, e.g.
The SQL command completed successfully.
Is there a way to make the output file show the creation of the table? Thanks in advance
You are talking about the options of the db2clp, which has many different options.
If I understood correctly, you are writing a script (a bash script, I think) and you want to retrieve the command output. For this, you have two options:
Write the command output into a file, and then read the file.
Redirect the command output to a variable.
The first option is the easier one. It uses the z option, which writes the whole output to a file. You can change this behaviour just by printing out what you want and then redirecting the output to a file.
db2 -tf myfile.sql -z /tmp/output
VAR=$(cat /tmp/output)
The second option is a little tricky, because the redirection implies the creation of another shell, so you have to reload the db2 profile there. This option uses the v option, which echoes each executed statement to the standard output, so that output should include what you want.
VAR=$(. ~db2inst1/sqllib/db2profile ; db2 -tvf myfile.sql)
Finally, you just need to process the content of VAR, via awk, sed, grep, etc.
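For example, a minimal sketch (assuming the v option echoed the statements into the captured output):
echo "$VAR" | grep -i "create table"   # keep only the echoed CREATE TABLE statements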
For more information: http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0010410.html

Manipulating giant MySQL dump files

What's the easiest way to get the data for a single table, delete a single table or break up the whole dump file into files each containing individual tables? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working perl scripts.
When I need to pull a single table from an SQL dump, I use a combination of grep, head and tail.
E.g.:
grep -n "CREATE TABLE" dump.sql
This then gives you the line numbers for each one, so if your table is on line 200 and the one after is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend upon those principles to knock up a script that would split the whole thing down into one file per table.
Anyone want a go doing it here?
Another bit that might help start a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
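Taking up the invitation above, here is a rough sketch of a script that splits the dump into one file per table using that list (untested beyond the basics; it assumes table names are backtick-quoted as in the grep line, and it discards anything before the first CREATE TABLE):
#!/bin/bash
# Split dump.sql into one <TableName>.sql file per CREATE TABLE block.
dump=dump.sql
total=$(wc -l < "$dump")
grep -n "CREATE TABLE " "$dump" | tr ':`(' ' ' | awk '{print $1, $4}' > tables.txt
prev_line=""
prev_name=""
while read -r line name; do
    # everything from the previous CREATE TABLE up to (not including) this one
    if [ -n "$prev_name" ]; then
        sed -n "${prev_line},$((line - 1))p" "$dump" > "${prev_name}.sql"
    fi
    prev_line=$line
    prev_name=$name
done < tables.txt
# the last table runs to the end of the file
[ -n "$prev_name" ] && sed -n "${prev_line},${total}p" "$dump" > "${prev_name}.sql"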
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump
creates a tbl_name.sql file that contains the CREATE TABLE statement
that creates the table, and a tbl_name.txt file that contains its
data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters
between column values and a newline at the end of each line. The
format can be specified explicitly using the --fields-xxx and
--lines-terminated-by options.
Note This option should be used only when mysqldump is run on the
same machine as the mysqld server. You must have the FILE privilege,
and the server must have permission to write files in the directory
that you specify.
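For example, a minimal sketch (mydb and /tmp/dumpdir are placeholders; note it is the server, not the client, that must be able to write there):
mkdir -p /tmp/dumpdir
mysqldump --tab=/tmp/dumpdir mydb
# /tmp/dumpdir now holds one tbl_name.sql and one tbl_name.txt per table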
This shell script will grab the tables you want and pass them to splitted.sql.
It’s capable of understanding regular expressions as I’ve added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late on this one, but if it can help anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file using the split command.
split -l 1000 import.sql splited_file
The above will split the SQL file every 1000 lines. Note that a fixed line count can cut a statement in half, so check the split boundaries before importing.
Hope this helps someone