Leading zeros after exporting to csv from SQL query - sql-server-2008

I have a SQL script which outputs data to a CSV file. The script works fine; however, the leading zero is stripped from the phone number when I export to CSV.
SQL:
SELECT RTRIM(Mobile_Telephone) AS Mobile_Number FROM TABLE
Datatype: char(20), not null
I am exporting the data to CSV using an SSIS package.
Can you please advise how I can preserve leading zeros after exporting to CSV?
Thanks
Aruna

Try using CONCAT after SELECT to wrap the value so the spreadsheet treats it as text, e.g.
SELECT CONCAT('="', RTRIM(Mobile_Telephone), '"') AS Mobile_Number FROM TABLE
See if it helps
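One caveat: CONCAT was only added in SQL Server 2012, so it is not available on SQL Server 2008. A minimal sketch of the same idea using plain + concatenation; the ="..." wrapper makes Excel evaluate the cell as a text formula, which keeps the leading zero (TABLE is the placeholder name from the question):
SELECT '="' + RTRIM(Mobile_Telephone) + '"' AS Mobile_Number
FROM TABLE -- placeholder table name from the question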

Related

Export non-varchar data to CSV table using Trino (formerly PrestoDB)

I am working on some benchmarks and need to compare the ORC, Parquet and CSV formats. I have exported TPC-H (SF1000) to ORC-based tables. When I want to export them to Parquet I can run:
CREATE TABLE hive.tpch_sf1_parquet.region
WITH (format = 'parquet')
AS SELECT * FROM hive.tpch_sf1_orc.region
When I try a similar approach with CSV, I get the error Hive CSV storage format only supports VARCHAR (unbounded). I would have assumed that it would convert the other data types (e.g. bigint) to text and store the column types in the Hive metadata.
I can export the data to CSV using trino --server trino:8080 --catalog hive --schema tpch_sf1_orc --output-format=CSV --execute 'SELECT * FROM nation', but then it gets emitted to a file. Although this works for SF1, it quickly becomes unusable at the SF1000 scale factor. Another disadvantage is that my Hive metastore wouldn't have the appropriate metadata (although I could patch it manually if nothing else works).
Does anyone have an idea how to convert my ORC/Parquet data to CSV using Hive?
In the Trino Hive connector, a CSV table can contain varchar columns only.
You need to cast the exported columns to varchar when creating the table:
CREATE TABLE region_csv
WITH (format='CSV')
AS SELECT CAST(regionkey AS varchar), CAST(name AS varchar), CAST(comment AS varchar)
FROM region_orc
Note that you will need to update your benchmark queries accordingly, e.g. by applying reverse casts.
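For example, a benchmark query that filters or aggregates on regionkey would have to cast it back to a numeric type. A hypothetical sketch against the region_csv table created above:
SELECT CAST(regionkey AS bigint) AS regionkey, name
FROM region_csv
WHERE CAST(regionkey AS bigint) > 2;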
DISCLAIMER: Read the full post before using anything discussed here. It's not real CSV and you might screw up!
It is possible to create typed CSV-ish tables using the TEXTFILE format with ',' as the field separator:
CREATE TABLE hive.test.region (
regionkey bigint,
name varchar(25),
comment varchar(152)
)
WITH (
format = 'TEXTFILE',
textfile_field_separator = ','
);
This will create a typed version of the table in the Hive catalog using the TEXTFILE format. TEXTFILE normally uses the ^A character (ASCII 1) as the field separator, but when it is set to ',' the files resemble the structure of a CSV file.
IMPORTANT: Although it looks like CSV, it is not real CSV. It doesn't follow RFC 4180, because it doesn't properly quote and escape. The following INSERT will not be written correctly:
INSERT INTO hive.test.region VALUES (
1,
'A "quote", with comma',
'The comment contains a newline
in it');
The text will be copied to the file unmodified, without escaping quotes or commas. To be proper CSV, it should have been written as:
1,"A ""quote"", with comma","The comment contains a newline
in it"
Unfortunately, it is written as:
1,A "quote", with comma,The comment contains a newline
in it
This results in invalid data that will be read back as NULL columns. For this reason, this method can only be used when you have full control over the text-based data and are sure that it doesn't contain newlines, quotes, or commas.
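Putting the pieces together, the export itself can then be done with a CTAS, just like the Parquet example above (a sketch, subject to the same quoting caveats):
CREATE TABLE hive.test.region_textfile
WITH (
    format = 'TEXTFILE',
    textfile_field_separator = ','
)
AS SELECT * FROM hive.tpch_sf1_orc.region;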

MySQL Workbench data import wrong value inserted

I am importing some data from a spreadsheet in the form of a CSV file. All the data seems to import fine except the amount figure. The data type I have set is DECIMAL(15,2), but when I import the value into MySQL it is different from the original. Example:
An amount in the CSV that is 14,250.25 shows as 14.00 once imported into MySQL.
I'm not too sure what went wrong. Please advise.
Prem.
Found out myself; posting in case it helps others.
I used ',' as the delimiter, and the currency values I had in the spreadsheet (CSV) contained commas when they were 1,000 or above, so on import the comma inside the currency value split the field and the remainder got dumped into the next column.
Example:
input: 12000.00 -> output: 12000.00
This will get imported without any problem.
input: 12,000.00 -> output: 12.00
If you have assigned ',' as the CSV delimiter, the digits after the comma get split off into a new column.
Hope it helps
:)
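If you cannot remove the thousands separators from the spreadsheet, another option is to quote the fields in the CSV and strip the commas while loading. A sketch using LOAD DATA with a user variable; the file path, table and column names are hypothetical:
LOAD DATA INFILE '/tmp/amounts.csv'
INTO TABLE amounts
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(id, @amount)
-- strip the thousands separators before MySQL converts to DECIMAL
SET amount = REPLACE(@amount, ',', '');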

SSIS Convert Between Unicode and Non-Unicode Error with LARGE Field

I have been struggling with this error for days now and have tried everything I know. I have a SQL statement that pulls data from several tables into another table. The field in question is an NTEXT field from a SQL 2000 database, which I now import into a SQL 2008 R2 table as the NVARCHAR(MAX) data type, because I thought the issue was the NTEXT data type. However, the SSIS package, which is just an OLE DB Source (with 1 field) into an Excel Destination, is still giving me the Unicode and Non-Unicode error! Several rows of data are over 8000 characters in length. Please help ...
After a lot of pain I finally came to the conclusion that exporting to Excel is not possible, so I turned to CSV. I used a "Flat File Destination" object, pointed at a CSV file that I had created with just the headers. The Text Qualifier was set to double quotes. In the Columns section I set the row delimiter to {CR}{LF} and the column delimiter to Comma {,}, because it is a CSV! The final part of the puzzle was to remove any double quotes, carriage returns and line feeds. I also had to convert the NTEXT field to VARCHAR(MAX), because REPLACE will not work with NTEXT. This is what I ended up with for the columns that had these "invalid characters":
REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(MAX),[MyNTEXTColumn]), CHAR(13),' '), '"', ''), CHAR(10),'') AS 'Corrected Output'
I replaced {CR} CHAR(13) with a space so that the output is formatted well for the consumer. I hope this helps someone out one day.
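For context, this is roughly how that expression sits in the OLE DB Source's SQL command; the table name is a placeholder:
SELECT
    REPLACE(
        REPLACE(
            REPLACE(
                CONVERT(VARCHAR(MAX), [MyNTEXTColumn]),
                CHAR(13), ' '),   -- carriage returns become spaces
            '"', ''),             -- strip double quotes (the text qualifier)
        CHAR(10), '')             -- strip line feeds
    AS [Corrected Output]
FROM dbo.MySourceTable;           -- placeholder table name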

BCP: Retaining null values as '\N'

I have to move a table from MS SQL Server to MySQL (~8M rows with 8 columns). NULLs in one of the columns (DECIMAL type) are exported as empty strings by the bcp export to a CSV file. When I use this CSV file to load the data into the MySQL table, the load fails with "Incorrect decimal value".
Looking for possible workarounds or suggestions.
I would create a view in MS SQL which converts the decimal column to a varchar column:
CREATE VIEW MySQLExport AS
SELECT [...]
COALESCE(CAST(DecimalColumn AS VARCHAR(50)),'') AS DecimalColumn
FROM SourceTable;
Then, import into a staging table in MySQL, and use a CASE statement for the final INSERT:
INSERT INTO DestinationTable ([...])
SELECT [...]
CASE DecimalColumn
WHEN '' THEN NULL
ELSE CAST(DecimalColumn AS DECIMAL(10,5))
END AS DecimalColumn,
[...]
FROM ImportMSSQLStagingTable;
This is safe because the only way the value can be an empty string in the export file is if it's NULL.
Note that I doubt you can cheat by exporting it with COALESCE(CAST(DecimalColumn AS VARCHAR(50)),'\N'), because LOAD DATA INFILE would see that as the literal string '\N', which is not the same as the NULL marker \N.
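For completeness, a minimal sketch of the MySQL staging side, assuming a hypothetical file path and treating the imported columns as plain varchar:
-- staging table: everything is varchar so the empty strings load cleanly
CREATE TABLE ImportMSSQLStagingTable (
    DecimalColumn VARCHAR(50)
    -- ... the other columns, also as VARCHAR ...
);

LOAD DATA INFILE '/tmp/export.csv'
INTO TABLE ImportMSSQLStagingTable
FIELDS TERMINATED BY ',';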

How to output data from iSQL to csv file with column names

I am trying to query a Sybase database using the iSQL client and export the query results to a text or CSV file with column names. However, the column headings are not exported to the file. The script below gives an error message; both the working script (without column headings) and the failing script are shown. I'd appreciate any advice.
Working SQL:
select * from siebel.S_ORG_EXT;
OUTPUT TO 'C:\\Siebel SQLs\\Account.CSV' FORMAT TEXT
DELIMITED BY ';' QUOTE ''
Not working SQL:
select * from siebel.S_ORG_EXT;
OUTPUT TO 'C:\\Siebel SQLs\\Account.CSV' FORMAT TEXT
DELIMITED BY ';' QUOTE '' WITH COLUMN NAMES;
If you are using Sybase iAnywhere, the WITH COLUMN NAMES option is not recognized by that Sybase product. Just thought I'd mention this for those like myself who have struggled with a similar issue.
HTH
You can try the following query:
SELECT * FROM siebel.S_ORG_EXT;
OUTPUT TO 'C:\\Siebel SQLs\\Account.CSV' FORMAT ASCII
DELIMITED BY ';' QUOTE '' WITH COLUMN NAMES;
Alternatively, you could use a different SQL client, for example SQuirreL SQL, which supports JDBC connections. In other SQL clients you will need to import jconn2.jar, which is part of your local web client installation.