Export to CSV with sqlcmd: dash line and Text to Columns issues

I am writing an sqlcmd command in a batch file to export SQL query results to a CSV file. However, I encounter two problems in the CSV file; are there any ways to solve them?
(I am new to batch files and sqlcmd.)
sqlcmd -S Servername -d DBname -U username -P pw -i C:\test\.sql -o "C:\Test\result.csv" -W -w 2000 -s ";"
1- There is a dash line ----- between the header and the data. How can I remove the dash line?
2- The result is currently consolidated into the first column of each row. Can I make it split into separate, ;-delimited columns in the result (not manually via Text to Columns in Excel)?

1- There is no option here to remove just the dashed line. You can use the option -h -1 to suppress the headers altogether, then select the header names yourself in a separate query UNIONed on top of your query.
2- Separate the columns with a comma "," rather than ";", since CSV stands for Comma-Separated Values :)
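Putting both together, a minimal sketch (the query file and its table/column names below are made up for illustration; the rest mirrors the question's command):
sqlcmd -S Servername -d DBname -U username -P pw -i C:\test\query.sql -o "C:\Test\result.csv" -W -w 2000 -s "," -h -1
where query.sql emits the header row by hand, since -h -1 suppresses both the automatic header and the dashed line:
-- all branches of the UNION must be string-compatible, hence the CAST
SELECT 'CustomerId', 'CustomerName'
UNION ALL
SELECT CAST(CustomerId AS varchar(20)), CustomerName
FROM dbo.Customers;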

Related

Cassandra CQLSH COPY FROM CSV: Can I create my own column from others?

I often use the cqlsh command COPY...FROM CSV..., but I have new needs.
I'd like to add an extra column to my Cassandra table that would be created from two other columns.
Example (csv file):
1;2
2;4
3;6
would become a table (my table) with these values:
12;1;2
24;2;4
36;3;6
I've used other options, but they're much slower than COPY...FROM CSV.
Do you know if I can do that using COPY...FROM CSV?
You can't do this with the COPY command alone.
If you are using Linux, then:
First dump the data to a CSV file with the COPY ... TO command, let's say csv_test.csv:
1;2
2;4
3;6
Then use the command below to prepend a new first column that concatenates the first two:
awk -F ";" '{print $1$2 ";" $0}' csv_test.csv > csv_test_combine.csv
Output file csv_test_combine.csv :
12;1;2
24;2;4
36;3;6
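The combined file can then be loaded back with the usual COPY...FROM CSV; a sketch assuming hypothetical keyspace, table, and column names:
COPY my_keyspace.my_table (combined, col1, col2) FROM 'csv_test_combine.csv' WITH DELIMITER = ';';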

How to remove blank records returned from osql?

I have a batch script which runs an SQL query on multiple databases and appends the results to a .dat file. The script is adding 3 blank lines under the result from each database, and also at the top of the file. I'm using the osql command below to run the SQL query.
osql -e -S %1 -n -b -h-1 -w1000 -s","
I need to remove those blank lines. Is there any osql option that I can use for this?
Try changing the -w value, e.g. -w4000 or -w8000, to set a larger column width.
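If no osql switch helps, another option is to filter the blank lines out after the fact. A sketch for the batch script (temp.dat and results.dat are hypothetical file names; findstr /v /r "^[ ]*$" drops lines that are empty or contain only spaces):
osql -e -S %1 -n -b -h-1 -w1000 -s"," -o temp.dat
findstr /v /r "^[ ]*$" temp.dat >> results.dat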

Change output format for MySQL command line results to CSV

I want to get headerless CSV data from the output of a query to MySQL on the command line. I'm running this query on a different machine from the MySQL server, so all those Google answers with "INTO OUTFILE" are no good.
So I run mysql -e "select people, places from things". That outputs stuff that looks kinda like this:
+--------+-------------+
| people | places |
+--------+-------------+
| Bill | Raleigh, NC |
+--------+-------------+
Well, that's no good. But hey, look! If I just pipe it to anything, it turns it into a tab-separated list:
people places
Bill Raleigh, NC
That's better- at least it's programmatically parseable. But I don't want TSV, I want CSV, and I don't want that header. I can get rid of the header with mysql <stuff> | tail -n +2, but that's a bother I'd like to avoid if MySQL just has a flag to omit it. And I can't just replace all tabs with commas, because that doesn't handle content with commas in it.
So, how can I get MySQL to omit the header and give me data in CSV format?
As a partial answer: mysql -N -B -e "select people, places from things"
-N tells it not to print column headers. -B is "batch mode", and uses tabs to separate fields.
If tab-separated values won't suffice, see this Stack Overflow Q&A.
The above solutions only work in special cases. You'll get yourself into all kinds of trouble with embedded commas, embedded quotes, other things that make CSV hard in the general case.
Do yourself a favor and use a general solution - do it right and you'll never have to think about it again. One very strong solution is the csvkit command line utilities - available for all operating systems via Python. Install via pip install csvkit. This will give you correct CSV data:
mysql -e "select people, places from things" | csvcut -t
That produces comma-separated data with the header still in place. To drop the header row:
mysql -e "select people, places from things" | csvcut -t | tail -n +2
That produces what the OP requested.
I wound up writing my own command-line tool to take care of this. It's similar to cut, except it knows what to do with quoted fields, etc. This tool, paired with @Jimothy's answer, allows me to get a headerless CSV from a remote MySQL server I have no filesystem access to onto my local machine with this command:
$ mysql -N -e "select people, places from things" | csvm -i '\t' -o ','
Bill,"Raleigh, NC"
csvmaster on github
Here is how to save results to CSV on the client side without additional non-standard tools, using only the mysql client and awk.
One-line:
mysql --skip-column-names --batch -e 'select * from dump3' t | awk -F'\t' '{ sep=""; for(i = 1; i <= NF; i++) { gsub(/\\t/,"\t",$i); gsub(/\\n/,"\n",$i); gsub(/\\\\/,"\\",$i); gsub(/"/,"\"\"",$i); printf sep"\""$i"\""; sep=","; if(i==NF){printf"\n"}}}'
A logical explanation of what needs to be done:
First, let's see what the data looks like in raw mode (with the --raw option). The database and table are t and dump3 respectively.
You can see that the field whose value starts with "new line" (in the first row) is split across three lines due to the newlines embedded in the value.
mysql --skip-column-names --batch --raw -e 'select * from dump3' t
one line 2 new line
quotation marks " backslash \ two quotation marks "" two backslashes \\ two tabs new line
the end of field
another line 1 another line description without any special chars
Output in batch mode (without the --raw option): each record is flattened to a single line by escaping characters such as \, <tab>, and newlines.
mysql --skip-column-names --batch -e 'select * from dump3' t
one line 2 new line\nquotation marks " backslash \\ two quotation marks "" two backslashes \\\\ two tabs\t\tnew line\nthe end of field
another line 1 another line description without any special chars
And the data output in CSV format:
The clue is to save the data in CSV format with the escaped characters converted back.
The way to do that is to convert the escape sequences which mysql --batch produces (\t for tab, \\ for backslash, and \n for newline) into the equivalent bytes in each value (field).
Then the whole value is escaped with " and also enclosed in ".
Btw, using the same character for escaping and enclosing greatly simplifies the output and the processing, because you don't have two special characters.
For this reason, all you have to do to the values (from the CSV format perspective) is change " to "" within them. In the more common scheme (with \ for escaping and " for enclosing), you would have to first change \ to \\ and then change " into \".
And the command explained step by step:
# produce the one-line-per-record output shown in step 2
mysql --skip-column-names --batch -e 'select * from dump3' t
# set the field separator to tab, because that is what mysql emits
| awk -F'\t'
# awk iterates over every line/record of the mysql output - its standard behaviour
'{
# the separator is empty because nothing is printed before the first output field
sep="";
# iterate over every field, converting each one into a proper CSV value
for(i = 1; i <= NF; i++) {
# note: the doubled backslashes below mean a single \ to awk, because they are escaped
# change \t into the byte corresponding to <tab>
gsub(/\\t/, "\t",$i);
# change \n into the byte corresponding to a newline
gsub(/\\n/, "\n",$i);
# change two \\ into one \
gsub(/\\\\/,"\\",$i);
# make the value CSV-proper literally - change " into ""
gsub(/"/, "\"\"",$i);
# print the field enclosed in " and preceded by the separator
printf sep"\""$i"\"";
# the separator is set to a comma after the first field is processed - earlier we don't need it
sep=",";
# add a newline after the last field - this is the CSV record separator
if(i==NF) {printf"\n"}
}
}'
How about using sed? It comes standard with most (all?) Linux distributions.
sed 's/\t/<your_field_delimiter>/g'
This example uses GNU sed (Linux). For POSIX sed (AIX/Solaris), I believe you would type a literal TAB instead of \t.
Example (for CSV output). Note that the leading while read consumes the first line (the header) before sed converts the rest of the stream, and that the first "mysql" is the client while the second is the database name:
mysql mysql -B -e "select * from user" | while read; do sed 's/\t/,/g'; done
localhost,root,,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,,,,,0,0,0,0,,
localhost,bill,*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,,,,,0,0,0,0,,
127.0.0.1,root,,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,,,,,0,0,0,0,,
::1,root,,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,Y,,,,,0,0,0,0,,
%,jim,*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,,,,,0,0,0,0,,
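As a sketch of the portable variant mentioned above: if your sed lacks \t support, you can let the shell substitute a literal tab via printf (unlike the while read loop above, this keeps the header line):
mysql mysql -B -e "select * from user" | sed "s/$(printf '\t')/,/g"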
The mysqldump utility can help you: with the --tab option it is basically a wrapper for the SELECT ... INTO OUTFILE statement.
Example:
mysqldump -u root -p --tab=/tmp world Country --fields-enclosed-by='"' --fields-terminated-by="," --lines-terminated-by="\n" --no-create-info
This will create the CSV-formatted file /tmp/Country.txt.
If you are using the mysql client you can set the result format per session, e.g.
mysql -h localhost -u root --result-format=json
or
mysql -h localhost -u root --vertical
Check out the full list of arguments here.
The mysql client detects the type of its output fd: if the fd is a pipe (S_IFIFO) it does not print ASCII tables; if the fd is a character device (S_IFCHR, i.e. a terminal) it does.
You can use -t/--table to force ASCII-table output even through a pipe, like:
$mysql -t -N -h127.0.0.1 -e "select id from sbtest1 limit 1" | cat
+--------+
| 100024 |
+--------+
-t, --table Output in table format.
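For contrast, the same query piped without -t comes out in the plain, parseable form (same hypothetical table as above):
$mysql -N -h127.0.0.1 -e "select id from sbtest1 limit 1" | cat
100024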
You can use spyql to read the tab-delimited output of mysql and generate a comma-delimited CSV and turn off header writing:
$ mysql -e "SELECT 'Bill' AS people, 'Raleigh, NC' AS places" | spyql -Oheader=False "SELECT * FROM csv TO csv"
Bill,"Raleigh, NC"
spyql detects whether the input has a header and what the delimiter is. The output delimiter is a comma by default. You can specify all these options manually if you wish:
$ mysql -e "SELECT 'Bill' AS people, 'Raleigh, NC' AS places" | spyql -Idelimiter="'\t'" -Iheader=True -Odelimiter="," -Oheader=False "SELECT * FROM csv TO csv"
Bill,"Raleigh, NC"
I would not turn off header writing on mysql because spyql can take advantage of it, for example, if you choose to generate JSONs instead of CSVs:
$ mysql -e "SELECT 'Bill' AS people, 'Raleigh, NC' AS places" | spyql "SELECT * FROM csv TO json"
{"people": "Bill", "places": "Raleigh, NC"}
or if you need to reference your columns:
$ mysql -e "SELECT 'Bill' AS people, 'Raleigh, NC' AS places" | spyql -Oindent=2 "SELECT *, 'I am {} and I live in {}.'.format(people, places) AS message FROM csv TO json"
{
"people": "Bill",
"places": "Raleigh, NC",
"message": "I am Bill and I live in Raleigh, NC."
}
Disclaimer: I am the author of spyql

bcp: Error = [Microsoft][SQL Server Native Client 10.0]String data, right truncation

I have recently encountered an error while working with bcp.
Here is the error.
SQLState = 22001, NativeError = 0 Error = [Microsoft][SQL Server
Native Client 10.0]String data, right truncation
I'm trying to unpack the data into a staging table which does not have any constraints, and whose datatypes are also fairly large compared to the data. I have about 11 files from different tables being bcp'd out and zipped, of which only one errors out when unpacking.
This is the command I have been using successfully. Very recently (when trying to make a copy of the current WH and setting up the process) I have been facing issues.
bcp.exe employee_details in employee_details.dat -n -E -S "servername"
-U sa -P "Password"
I have tried changing the options to -C -T -S, which worked when I supplied the format manually. This is a very big and important packet I need to load into my WH.
I don't know if a format file is in play here or not.
Any help is appreciated.
Thanks
Cinnamon girl.
We also faced the same issue while doing BCP, and it turned out to be a problem with the newline character in the .dat file.
View the file in Notepad++ and click on "Show All Characters" to see the newline characters.
BCP throws the following error with the -r "\r\n" option, i.e. with the command below:
bcp dbo.Test in C:\Test.dat -c -t "|" -r "\r\n" -S "DBServerName" -T -E
" SQLState = 22001, NativeError = 0 Error = [Microsoft][SQL Server
Native Client 10.0]String data, right truncation "
BCP treats all rows in the file as a single row with the -r "\n" or -r "\r" option, i.e. with the command below:
bcp dbo.Test in C:\Test.dat -c -t "|" -r "\n" -S "DBServerName" -T -E
The issue was resolved when we used the hexadecimal value (0x0a) for the newline character in the BCP command:
bcp dbo.Test in C:\Test.dat -c -t "|" -r "0x0a" -S "DBServerName" -T -E
The bcp right-truncation error occurs when there is more data than can fit into a single column.
This can be caused by an improper format file (if one is being used) or delimiter.
The line terminator can also cause this error (Windows uses CRLF, '\r\n'; UNIX uses '\n'). For example: your format file specifies the Windows CRLF '\r\n' as the row terminator, but the file uses '\n' line endings. That means bcp tries to fit the whole file into one row (indeed one column), which leads to the right-truncation error.
I was also getting the truncation message. After hours of searching forums and trying suggested solutions, I finally got my load to work.
The reason for the truncation message was that I was gullible enough to think that the column name in the format file actually mattered. It's the preceding number that dictates where the data gets loaded.
My input file did not have data for the third column in the table, so this is how my format file looked:
... "," 1 Cust_Name SQL_Latin1...
... "," 2 Cust_Ref SQL_Latin1...
... "," 3 Cust_Amount SQL_Latin1...
... "\r\n" 4 Cust_notes SQL_Latin1...
My input file looked like this:
Jones,ABC123,200.67,New phone
Smith,XYZ564,10.23,New SIM
The table looked like
Cust_Name Varchar(20)
Cust_Ref Varchar(10)
Cust_Type Varchar(3)
Cust_amount Decimal(10,2)
Cust_Notes Varchar (50)
Cust_Tel Varchar(15)
Cust......
I'd assumed that by giving the column name in the format file, the data would go into the correspondingly named column in the table.
This, however, works, because it is the column number that matters; the column name is just noise:
... "," 1 A SQL_Latin1...
... "," 2 B SQL_Latin1...
... "," 4 C SQL_Latin1...
... "\r\n" 5 D SQL_Latin1...
For us it turned out that the file we were trying to upload was in Unicode instead of ANSI format.
There is a -N switch, but our tables didn't have any NVARCHAR data.
We just saved the file in ANSI format and it worked; if you do have NVARCHAR data, you may need to use the -N switch.
See TechNet - Using Unicode Native Format to Import or Export Data.
I know this is old, but I just came across an instance where I was getting this error; it turned out one of my numeric fields had more decimals than were allowed by the schema.
In my case the reason was that one field contained the character "|" = chr$(124), while my separator was "│" = chr$(179).
MS SQL did not distinguish between the two characters. I eliminated the chr$(124) occurrences and the BCP import then worked fine.
Open the files in Notepad++ and go to the View tab -> Show Symbol -> Show All Characters. I was facing the same issue in .tsv files; one tab was misplaced.
Late, but still: in my case I got exactly this one:
SQLState = 22001, NativeError = 0
Error = [Microsoft][ODBC Driver 11 for SQL Server]String data, right truncation
And the problem was that the schema had changed: the target database had two new fields. Once I reinstalled the previous schema, the import succeeded.
After spending 4 hours doing a ton of trial and error, I found that the solution can be as simple as this: the table you are importing the data into must have a schema that suits the file you are trying to import.
E.g. in my case I was importing a .csv row of 667,aaa,bbb into a table with the schema
int(4), char(2), char(2), causing String Data, Right Truncation ('aaa' is three characters and cannot fit in char(2)).
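A minimal illustration of that mismatch and its fix (hypothetical table and column names):
-- fails: 'aaa' and 'bbb' are three characters and do not fit in char(2)
CREATE TABLE dbo.StagingTest (num int, code char(2), flag char(2));
-- widening the columns to match the data resolves the truncation error
ALTER TABLE dbo.StagingTest ALTER COLUMN code char(3);
ALTER TABLE dbo.StagingTest ALTER COLUMN flag char(3);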
My bcp was ignoring all of those newline options like \r, \n, \r\n, 0x0d, 0x0a, 0x0d0x0a, etc. The only solution I found was to embed a "real" newline directly into the bcp command. I think this works because the CSV was generated on the same server the bcp is running on; when I transfer the CSV to the MSSQL server manually, 0x0a works fine inside BULK INSERT as well.
Please note that set nl=^ must be followed by two blank lines.
my_script.bat:
@echo off
setlocal enableDelayedExpansion
set nl=^


set cmd=bcp db_name.db_schema.my_table in stats.csv -w -t, -r "!nl!" -S my_server -U my_username -P password123
!cmd!

Simple mysql into text file: How to avoid the first line being written?

This is a simple .bat file which currently 'works'; I'm looking to avoid having the field name as the first line in the text file.
C:\xampp\mysql\bin\mysql -u sample -pnotpass --database test -e "SELECT url FROM single WHERE fold = 'bb' AND dir= 'test_folder' ; " > test123.txt
Obviously, it's not a "Windows-related question"; is there a way to ask MySQL to only print the results and skip the field name?
Thanks
There's a "column-names" paramater which defaults to true. Just set it to false.
C:\xampp\mysql\bin\mysql -u sample -pnotpass --database test --column-names=false -e "SELECT url FROM single WHERE fold = 'bb' AND dir= 'test_folder' ; " > test123.txt
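For what it's worth, the standard mysql client also exposes the same switch as --skip-column-names (short flag -N), so this sketch should be equivalent:
C:\xampp\mysql\bin\mysql -u sample -pnotpass --database test --skip-column-names -e "SELECT url FROM single WHERE fold = 'bb' AND dir= 'test_folder' ; " > test123.txt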
It depends on the precise format you require, but I'm guessing that using SELECT INTO OUTFILE would probably be a step in the right direction. It would also remove the need to redirect the content to a file, although you'll need to remove that file once you've finished with it (or at the start of the batch file) otherwise MySQL will spit tacks the next time it tries to create it and discovers it already exists.
For example, to create a CSV style file you could use:
SELECT url INTO OUTFILE "\wherever\test123.txt"
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY "\n"
FROM single WHERE fold = 'bb' AND dir= 'test_folder' ;
Incidentally, the user in question must have the FILE privilege for INTO OUTFILE to work.
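Granting it would look something like this (the host part is an assumption; FILE is a global privilege, so it has to be granted ON *.*):
GRANT FILE ON *.* TO 'sample'@'localhost';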