Stuck with data conversion in SSIS

I'm stuck with a conversion problem... or at least a datatype problem.
Trying to read a csv-file and update a SQL-database with its content.
The column I have problems with has numbers like 64,51 (at most 3 digits with 2 decimals).
In the database I have set the datatype to decimal(3,2), and in the Flat File Connection Manager I have set it to decimal (DT_DECIMAL) with a scale of 2.
Along the flow I use a Derived Column to merge two columns into one, and then convert the new column.
Looking in the Advanced Editor of the OLE DB Destination, I can see that in the Input Columns the column is set as DT_DECIMAL, but in the External Columns it's set as DT_NUMERIC.
How do I change that?
I can change the properties but every time it reverts back to numeric when I press OK.
The error message says: "Conversion failed because the data value overflowed the specified type."
Thanks for all tips on this!
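One thing worth noting about that error: SQL Server's decimal(3,2) can only hold values up to 9.99, because the precision (3) counts all digits and the scale (2) takes two of them, so a value like 64,51 overflows regardless of how the DT_DECIMAL/DT_NUMERIC mismatch is resolved. A quick T-SQL check:
-- decimal(3,2): 3 digits in total, 2 of them after the decimal point
SELECT CAST(9.99 AS decimal(3,2));   -- fine: the largest value that fits
SELECT CAST(64.51 AS decimal(3,2));  -- fails: arithmetic overflow
-- For "at most 3 digits with 2 decimals", decimal(5,2) fits values up to 999.99
SELECT CAST(64.51 AS decimal(5,2));  -- fine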

Related

SSIS data load truncates values into destination table

I have an SSIS package with a simple source (a Vertica query) and destination (a SQL DB). When I load the data, my values are cut off.
For example, I have a country code that is loaded as "C" instead of "CN". I tried to add a Data Conversion and change the data type to DT_STR, which normally works, but this time it doesn't seem to do anything. Any idea how I can handle these truncations? I have mapped the field lengths the same from source to destination.
Go into the Advanced Properties of the Source component, open each of the Output Columns that has truncated data, and set its Length property to the maximum possible length of the data in that column.
Also take out your data conversion component, since you shouldn't need it and it might interfere with the results of the above change.
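To find the right value to set, a quick query against the source can help; a sketch, assuming a Vertica source (table and column names are hypothetical; LENGTH is Vertica's spelling):
-- Longest country code actually present in the source
SELECT MAX(LENGTH(country_code)) FROM staging.countries;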

SSIS 2012: column header is too long for column width when extracting a fixed-width flat file

I am attempting to extract a table from sql database into a fixed width flat file.
The file should have a column header
I am attempting to recreate a file that already existed, where the header for certain columns (for example Gender, with a width of 1) has a column name that is too long for its column format.
The existing file just cuts off these column headers, so Gender (the db column name and input column to the destination) becomes 'G' - just what will fit. But when I attempt to reproduce the extract in SSIS 2012 by pointing at the existing file while creating the flat file connection manager, it works without a header, but not when I check "column names in first data row".
Is there a way to change/shorten the column names to just what will fit in the format? I am using "ragged right" file format and the data looks perfect without column headers.
Any help is appreciated.
Steve
SSIS really likes consistent metadata. The flat file definition specifies that Gender has a length of one, and it's going to hold the column header to the same standard that it holds the data. My experience with fixed-width files is that they've never had headers, which is painful when they're a few thousand bytes wide, and which is likely due to this problem.
What you can do is to manually specify a header row in the Flat File Destination.
Within my Connection Manager, I uncheck the Column Names in First Row and increment the Header Rows to Skip value to 1.
In my example, I used the following query:
SELECT
    *
FROM
(
    VALUES
        ('AAAAAAAAAAAAAAAAAA', 'BBBBBBBBBBBBBBBBBBBBBBBB', 'M', 'CCCCCCC')
) D (c1, c2, Gender, c4);
This results in an output file that looks like
Col1Is18BytesWide NextColumnAlignsWithNextGenderSeeWhatIDidThere
AAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBMCCCCCCC
That may or may not be the solution you're looking for. I think it'd drive me mad seeing column headers not aligning with the data values but you never know how other systems expect their data.

Have SSIS detect the column sizes of a csv file

I'm trying to import a csv file into SQL using SSIS and am hitting a fundamental flaw.
SSIS seems to determine that all fields are varchar(50), even though it correctly identifies the comma delimiter.
This is causing issues when I try to send the data to my table in SQL.
Is there a way of making it recognise that a field of length 3 is actually a field of length 3, and not 50?
Thanks
Yes, there's a Suggest Types function in the Flat File Connection Manager Editor.
Assume you have a CSV file with a short string column, a number column, and a date column.
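For illustration, a small made-up sample (column names and values are hypothetical, picked so the suggested types below make sense):
Letter,Number,Date
A,1,2014-01-01
B,2,2014-02-15
C,3,2014-03-30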
Create a new Flat File connection and browse to the file on your computer. The Columns tab shows a preview of the file.
Click the Advanced tab. There you can see that all columns have the DT_STR type with a length of 50. You can also see the Suggest Types... button. Click it.
Set the parameters as you like; the defaults are fine in my case. Click OK.
Now the first column has the type DT_STR with a length of 1. (The other two columns have got new types as well: the Number column got DT_I1, because we chose the smallest appropriate integer type option, and the Date column got DT_DATE.)

Issue with SSIS on flat files to tables with fixed position

I have a couple of questions about the task on which I am stuck and any answer would be greatly appreciated.
I have to extract data from a flat file (CSV) as an input and load the data into the destination table with a specific format based on position.
For example, if I have order_id,Total_sales,Date_Ordered with some data in it, I have to extract the data and load it in a table like so:
The first field has a fixed length of 2 with numeric as a datatype.
total_sales is inserted into the column of total_sales in the table with a numeric datatype and length 10.
The date as a datetime, in a format different from that of the flat file, like ccyy-mm-dd.hh.mm.ss.xxxxxxxx (here the x's have to be filled with zeros).
Maybe I don't have the right idea to solve this - any solution would be appreciated.
I have tried using the following ways:
Used a Flat File Source to read the CSV file and fed it into an OLE DB Destination with a table of fixed data types already created. The problem here is that the columns are loaded, but I have to pad them with zeros: the date when it is loaded, and most of the other columns whenever a value doesn't use the full column length, in which case it has to be preceded with zeros.
For example, if I have an Orderid of length 4 and in the flat file I have an order id like 201, then it has to be changed to 0201 when it is loaded in the table.
I also tried another way: using a flat file source, I created a variable that takes the entire row as input and tried to separate it with derived columns. I was successful to an extent, but in the end the data type of the derived column got fixed explicitly to Boolean, which I am not able to change to the data type I want.
Please give me some suggestions on how to handle this issue...
Assuming you have a csv file in the following format
order_id,Total_sales,Date_Ordered
1,123.23,01/01/2010
2,242.20,02/01/2010
3,34.23,3/01/2010
4,9032.23,19/01/2010
I would start by creating a Flat File Source (inside a Data Flow Task), but rather than having it fixed width, set the format to Delimited. Tick the Column names in the first data row. On the column tab, make sure row delimiter is set to "{CR}{LF}" and column delimiter is set to "Comma(,)". Finally, on the Advanced tab, set the data types of each column to integer, decimal and date.
You mention that you want to pad the numeric data types with leading zeros when storing them in the database. Numeric data types in databases tend not to hold leading zeros, so you have two options: either hold the data as the types they are in the target system (int, decimal and datetime) or use the Derived Column component to convert them to strings. If you decide to store them as strings, adding an expression like
"00000" + (DT_WSTR, 5) [order_id]
to the Derived Column control will add up to 5 leading zeros to order id (don't forget to set the data type length to 5) and would result in an order id of "00001"
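The date requirement from the question can be sketched the same way. Assuming the column comes through as a date named Date_Ordered with no time-of-day portion (so the time and fraction parts are all zeros), a Derived Column expression along these lines produces the ccyy-mm-dd.hh.mm.ss.xxxxxxxx layout (28 characters in total):
(DT_WSTR, 4)YEAR(Date_Ordered) + "-"
+ RIGHT("0" + (DT_WSTR, 2)MONTH(Date_Ordered), 2) + "-"
+ RIGHT("0" + (DT_WSTR, 2)DAY(Date_Ordered), 2)
+ ".00.00.00.00000000"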
Create your target within a Data Flow Destination and make the table/field mappings accordingly (or let SSIS create a new table / mappings for you).

MS Access setting to ignore date conversion error

An Access DB imports a fixed width text file; one column is mostly dates.
When the date is not available, the file's creator actually uses the string "Null".
Access puts the row in the table with that field actually null.
But when the files started coming with different field widths, I copied the DB, tweaked the starting/width values in the input spec, and imported. NOW, all the rows with null get logged in (table)_import_errors as an error converting text to date.
I have found no setting (not that I changed any) to explain it. One difference is that although both DBs are in Access 2000 format, the original is on a machine that still has Access 2000, while the new one is being handled by Access 2003.
Is that a behavior change in the Access version? Is pre-processing the file the only solution?
Thanks, David. That's what I would have done (except for the Excel part) if it had not fixed itself. I posted that, but apparently someone didn't like the public admission that Access has bugs.
The only thing that changed was that two other columns in the fixed-width plain-text input were wider. Yet Access "decided" to discard the whole row, instead of just the date field, for three consecutive attempts. The fourth time, it still reported an error but imported the rest of the row.
So, when Access misbehaves for no good reason, try again a time or two, then try explicitly coding the conversion from text.
Two possibilities:
Use a buffer field or buffer table that imports the date field into a text field. Then you can process that into the appropriate values in the final destination field.
Use a SQL import instead of DoCmd.TransferText. What you do in that case is use a connect string in the FROM clause so you can then process the date field in your SELECT:
SELECT Sheet1.FirstField, Replace(Sheet1.DateField, "NULL", Null) As DateField
FROM Sheet1 IN 'C:\Import\Spreadsheet.xls'[Excel 5.0;HDR=YES;IMEX=1;];
Convert that into an INSERT query and you're golden.
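For instance, a minimal sketch reusing the SELECT above (the target table tblImport and its columns are hypothetical):
INSERT INTO tblImport (FirstField, DateField)
SELECT Sheet1.FirstField, Replace(Sheet1.DateField, "NULL", Null) As DateField
FROM Sheet1 IN 'C:\Import\Spreadsheet.xls'[Excel 5.0;HDR=YES;IMEX=1;];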