I have been struggling with this error for days now and have tried everything I know. I have an SQL statement that pulls data from several tables into another table. The field in question is an NTEXT field from a SQL 2000 database, which I now import into a SQL 2008 R2 table with the NVARCHAR(MAX) data type, because I thought the issue was the NTEXT data type. However, the SSIS package, which is just an OLE DB Source (with 1 field) into an Excel Destination, is still giving me the Unicode and Non-Unicode error! Several rows of data are over 8,000 characters in length. Please help ...
After a lot of pain I finally came to the conclusion that exporting to Excel is not possible, so I turned to CSV. I used a "Flat File Destination" object, pointed at a CSV that I had created with just the headers. The Text Qualifier was set to double quotes. In the Columns section I set the Row delimiter to {CR}{LF} and the Column delimiter to Comma {,}, because it is a CSV! The final part of the puzzle was to remove any double quotes, carriage returns, and line feeds. I also had to convert the NTEXT field to VARCHAR(MAX) because REPLACE will not work with NTEXT. This is what I ended up with for the columns that had these "invalid characters".
REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(MAX),[MyNTEXTColumn]), CHAR(13),' '), '"', ''), CHAR(10),'') AS 'Corrected Output'
I replaced {CR} CHAR(13) with a space so that the output stays well formatted for the consumer. I hope this helps someone out one day.
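For context, a minimal sketch of how that expression might sit in the source query feeding the Flat File Destination; the table name here is a placeholder, not from the original package:

SELECT
    REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR(MAX), [MyNTEXTColumn]), CHAR(13), ' '), '"', ''), CHAR(10), '') AS [Corrected Output]
FROM dbo.MySourceTable;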
I've created a table in SQL Server 2016 with this definition:
CREATE TABLE Stage.Test (f1 DATE NULL);
INSERT INTO Stage.Test (f1) VALUES ('1/1/2000');
Notice the f1 column uses the DATE data type.
Then I created a data flow task in SQL Server Data Tools (VS 2019) to load data from that table. I created an OLEDB Source Component and set the source table to Stage.Test. But when I examine the data type of the "f1" column (in both the 'External Column' and 'Output Column' columns), it says it's a Unicode string:
Why is it choosing a Unicode string instead of DT_DATE?
I haven't seen this exact example when it comes to date fields, but SSIS converts data automatically when it has detected a field to be of a certain type. Perhaps it's the '/' in your date field that does it. We do not use this date format in these parts of the world, so I've never had the problem. You can especially see this when you import Excel files with SSIS. I usually have this problem with strings, where Unicode strings can sometimes become non-Unicode strings.
A way to fix this could be to:
1. edit the SQL query to explicitly cast the field as a date (a sketch follows below),
2. add a conversion step in the data flow after the source (like a derived column getting the parts of the string in the right order), or
3. try to change the output data type by right-clicking the source in the data flow, using the advanced editor, and then editing the output column's data type:
(screenshot: SSIS Source Advanced Editor Output)
I'm not sure if option 3 would work with this date format issue, as I do not have experience with the format myself, but it is working fine for my Unicode/non-Unicode problem.
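To illustrate option 1, a minimal sketch of casting explicitly in the source query; the table and column come from the question, but the cast itself is an assumption about how you would write it, since f1 is already DATE and the cast only makes the intended type explicit to the source:

SELECT CAST(f1 AS DATE) AS f1
FROM Stage.Test;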
I have the following problem:
I have an SSIS package that starts with a query executed against an Oracle DB, and I would like to export a fixed-width flat file with the ANSI 1253 code page. I get an error:
The data conversion for column [column_name] returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page".
The problem has to do with the second part of the message, as the width is OK. I tried to use Data Conversion from the Toolbox, but it didn't work (probably I didn't use it the right way). I have only SELECT privileges on the database, so I cannot add any SQL procedures to remove special characters in the query. Also, loading the data into a staging table wouldn't be the best choice in my case. Does anyone have any idea on how to convert my data without getting this error?
Thanks a lot in advance
Load the data using your source from the Oracle DB and keep the data types it gives you.
Then add a Derived Column and cast your column:
(DT_STR, [Insert Length], 1253)[columnName]
If the column is NTEXT, you need to do 2 steps to get to a string:
(DT_STR, [Insert Length], 1253)(DT_WSTR, [Insert Length])[NtextColumn]
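For example, a concrete instance of that two-step cast, assuming a hypothetical NTEXT column named [Notes] trimmed to 200 characters (both the name and the length are placeholders, not from the original answer):

(DT_STR, 200, 1253)(DT_WSTR, 200)[Notes]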
I'm importing a SQL view into SSIS using the Flat File Connection Manager. One of my columns in SQL has commas in it (123 Main St, Boston, MA). When I import the data into SSIS, the commas within the column are being treated as delimiters, and my column is being broken into several columns. I have done a lot of research online and have followed some workarounds, which aren't working for me.
In SQL Server, I added double quotes around the values that have commas in them.
' "'+CAST(a.Address as varchar(100))+'" '
So, 123 Main St, Boston, MA now reads "123 Main St, Boston, MA"
Then in my SSIS Flat File Connection Manager,
In the General tab:
Text Qualifier is set to "
Header Row Delimiter is set to {CR}{LF}
In the columns tab:
Row delimiter is set to {LF}
Column delimiter is set to Comma {,}
And in the advanced Tab, all of my columns have the Text Qualified set to True.
After all of this, my column with commas in it is still being separated into multiple columns. Am I missing a step? How can I get the SSIS package to treat my address column as one column and not break it out into several columns?
EDIT: Just to add more specifics: I am pulling from a SQL view that has double quotes around any field that has commas in it. I am then emailing that file and opening it in MS Excel. When I open the file, it reads as follows:
123 Main St Boston MA" " (In three cells)
And I need it to read as
123 Main St, Boston, MA (in one cell)
Have a look at this - Commas within CSV Data
If there is a comma in a column then that column should be surrounded by a single quote or double quote. Then if inside that column there is a single or double quote it should have an escape character before it, usually a \
Example format of CSV
ID - address - name
1, "Some Address, Some Street, 10452", 'David O\'Brian'
Replace every delimiter comma with another unique delimiter that does not occur inside any of the values, like a vertical bar (|).
Change the column delimiter to this new delimiter, and set the text qualifier to a double quote (").
You can automate the replace process using a Script Task before the Data Flow Task to replace the delimiters. You can use the replace script from here.
Also have a look at these resources.
Fixing comma problem in CSV file in SSIS
How to handle extra comma inside double quotes while processing a CSV file in SSIS
I ended up recreating the package, using the same parameters that are listed in my question. I also replaced this
' "'+CAST(a.Address as varchar(100))+'" '
with this in my SQL view
a.Address
And it now runs as desired. Not sure what was going on there. Thanks to everyone for their comments and suggestions.
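One possible explanation, though it was never confirmed in the thread: the original expression padded the qualifier with literal spaces (' "' and '" '), so each field actually started with a space rather than with the text qualifier, and the qualifier was never recognized. If quoting is ever needed again, a sketch without the padding, using the same column as the question:

'"' + CAST(a.Address AS varchar(100)) + '"'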
I want to import data from an Excel sheet into a MySQL database with the MySQL for Excel plugin. Some cells contain text with semicolons, and I have already figured out that this causes a SQL error. I tried escaping the semicolons with a backslash but I still get the error message. How can I escape the semicolon?
Kai,
this behaviour is purely the fault of MySQL for Excel, and seems to be a bug.
In the meantime, if you are not keen on changing your Excel data as suggested by others, there is a workaround:
In your MySQL-for-Excel window click Options, then select Preview SQL statements before they are sent to the server, and click Accept.
Then proceed as normal with export / append data using the add-in, but when the Review SQL script window appears, copy the contents into a different SQL tool (MySQL Workbench, HeidiSQL, SQLWorkbench etc.), and run it there. Then click Cancel in the MySQL-for-Excel popups, and refresh the query if necessary.
Also: feel free to report the bug at: http://bugs.mysql.com/
Replace the semicolon with some unique text, e.g. [SEMICOLON].
Next import the data to SQL and run something like
UPDATE your_table
SET your_field = REPLACE(your_field, '[SEMICOLON]', ';')
WHERE your_field LIKE '%[SEMICOLON]%'
I think all you need to do is consider the requirements Excel has when it imports data from CSV files (the parsing rules are probably the same or similar).
In your case, if a field contains any special characters, just quote the values with double quotes before importing the content into Excel.
So:
UPDATE your_table
SET your_field = CONCAT('"', your_field, '"')
WHERE your_field LIKE '%,%'
The following rules should apply:
Fields containing a line-break, double-quote, and/or commas should be quoted
Any field may be quoted (with double quotes)
A (double) quote character in a field must be represented by two (double) quote characters.
More details: Wikipedia: Comma-separated values
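Building on the rule above that embedded double quotes must be doubled, a sketch that quotes the field and escapes any quotes it already contains (your_table and your_field are placeholders, as in the statement above):

UPDATE your_table
SET your_field = CONCAT('"', REPLACE(your_field, '"', '""'), '"')
WHERE your_field LIKE '%,%' OR your_field LIKE '%"%'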
I need to export a result set from a SQL Server stored procedure to a csv file. One of the fields being exported is a notes field which can contain quotes and carriage return/line feeds.
I'm using the SSIS data flow task to get the result set from the sproc and then to a flat file destination.
The problem I'm having is how to deal with the carriage return/line feeds. With the row delimiter being {CR}{LF}, it starts a new row when it encounters this in the notes field. I'm viewing the output with the preview when creating the flat file destination.
The database notes field is of datatype NVARCHAR(MAX).
I'm also having the same problem when exporting record details to an SSRS report. The notes fields are not persisting the carriage return/line feeds, resulting in a garbled bunch of text.
Any help would be much appreciated. Been at this for hours.
Thanks
Change the field datatype to TEXT or NTEXT.
You can also do a double substitution (sketched below):
Replace CR and LF with 2 unique character combinations in the SP.
Replace these character sets back with CR and LF in SSIS/SSRS.
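A minimal sketch of the SP side of that substitution in T-SQL; the token strings, column, and table names are illustrative only, so pick sequences that cannot occur in your data:

SELECT REPLACE(REPLACE(Notes, CHAR(13), '~CR~'), CHAR(10), '~LF~') AS Notes
FROM dbo.NotesTable;

On the SSIS or SSRS side, the same tokens are then replaced back with CR and LF.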