I have a comma-separated CSV file with double quotes as the text qualifier. Sometimes the source system export is incorrect, e.g.:
A text field includes a double quote, causing ADF to think there is one extra column and fail.
A text field includes the escape character, causing ADF to concatenate two columns and fail with the error that there is one column fewer than expected.
The source system vendor is unable to fix this, so these errors will happen every now and then. Is it possible for ADF to just save the whole row into a logfile/log table and skip that line?
I am aware of this question, but I can't change the escape character in this case.
Thanks in advance for your answer!
Johan
In ADF, if you are using the Copy activity, you can use the Fault tolerance and Enable logging features to achieve this. When transforming data with a Data Flow, use the "Error row handling" feature to achieve it.
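If the built-in fault tolerance doesn't give you exactly the row-level log you want, another option is to pre-validate the file before ADF picks it up. Below is a minimal Python sketch, assuming hypothetical file names and the comma/double-quote settings from the question; it keeps rows whose field count matches the header and logs the rest:

```python
import csv

# Hypothetical names: source.csv is the raw export, source_clean.csv is
# what ADF should ingest, rejected_rows.log collects the bad rows.
SRC, CLEAN, LOG = "source.csv", "source_clean.csv", "rejected_rows.log"

with open(SRC, newline="", encoding="utf-8") as src, \
     open(CLEAN, "w", newline="", encoding="utf-8") as clean, \
     open(LOG, "w", encoding="utf-8") as log:
    reader = csv.reader(src, delimiter=",", quotechar='"')
    writer = csv.writer(clean, delimiter=",", quotechar='"')
    header = next(reader)
    writer.writerow(header)
    expected = len(header)
    for lineno, row in enumerate(reader, start=2):
        if len(row) == expected:
            writer.writerow(row)                    # well-formed: keep
        else:
            log.write(f"line {lineno}: {row!r}\n")  # bad: log and skip
```

One caveat: a stray unescaped quote can make the parser merge several physical lines into one logical row, so the logged line numbers are approximate; treat this as a starting point, not a complete validator.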
I'm trying to export a CSV from my client's FluidSurveys account and import it into a database I've created. I've never actually worked with a CSV before, so excuse my ignorance.
I've looked into this error and none of the solutions seem to be working for me. I'm at a loss; I've been trying to import this file for hours now.
Settings are as follows:
There is already a table with columns for this data to be inserted into.
What am I missing here?
You've shown the exported CSV file in Excel or Calc, so it is impossible to tell how your columns are enclosed. There is probably some character other than ' or ". Please show the exported CSV in Notepad; that will make the structure of the CSV clear.
I found that FluidSurveys CSV files had the first two header bytes incorrect.
They are only 7F7E. Changing them to the expected Unicode BOM FFFE works as expected: the files can then be read into Excel with no garbage characters at the start.
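If you hit the same export, a few lines of Python can patch those header bytes, assuming a hypothetical file name; keep a backup before running it:

```python
# Patch the first two bytes of the exported file in place.
path = "fluidsurveys_export.csv"  # assumed name

with open(path, "rb") as f:
    data = f.read()

if data[:2] == b"\x7f\x7e":              # the incorrect 7F7E header
    with open(path, "wb") as f:
        f.write(b"\xff\xfe" + data[2:])  # replace with the FFFE BOM
```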
Hi all, quick question for you.
I have an SSIS2012 package that is reading a flat file (.csv) and is loading it into a SQL Server database table. However, I am getting an error for one of the columns when loading the OLEDB Destination:
[Flat File Source [32]] Error: Data conversion failed. The data conversion for column "Active_Flag" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
I am wondering if this is because in the flat file (which is comma-delimited), the values are literally spelled out "TRUE" or "FALSE". The advanced page of the flat file properties has it set to "DT_BOOL", which I thought was right. It was on DT_STR originally, and that wasn't working either.
In the SQL Server table the column is set up as a bit and allows nulls. Is this because it is literally typed out TRUE/FALSE? What's the easiest way to fix this?
Thanks for any advice!
It actually turned out there was a blank space in front of "True"/"False" in the file. It was just bad data and I missed it. Fixing that solved my issue. Thank you though; I did try that, and when it didn't work, that's when I knew it was something else.
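For anyone hitting the same thing, here is a small Python sketch, with an assumed file name, that flags values in the Active_Flag column carrying stray whitespace before the SSIS load:

```python
import csv

# Scan the flag column for values with leading/trailing whitespace,
# the kind of bad data described above. "input.csv" is an assumption.
with open("input.csv", newline="") as f:
    for lineno, row in enumerate(csv.DictReader(f), start=2):
        value = row["Active_Flag"]
        if value != value.strip():
            print(f"line {lineno}: whitespace around {value!r}")
```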
(SQL Server 2008)
So here's my task...
I need to export query results to file, and then import that file using SSIS to another DB.
Specific to the task, the data contains every awkward Unicode character you can think of, so delimiting with commas, pipes, etc. is out of the question.
Here are the options SSMS gives me for export format:
Column Aligned
Comma/Tab/Space delimited
Custom delimiter
And here are the options SSIS gives me for a flat file data source:
Delimited (custom)
Fixed Width
Ragged Right
So given that a delimiter character is out of the question, I cannot see another method that both SSMS and SSIS agree on.
Such as fixed width?
Seems strange that the two closely related MS products have such different options.
Or have I missed something here?
Any advice appreciated!
It seems you need to try out different combinations of options while creating the delimited flat file (for your exported query result).
Try setting the code page to UTF-8, with and without the Unicode option. Also set the text qualifier to " or any character of your choice that you think might work, and try different options for the column delimiter.
Once you are able to create the delimited file, apply the same settings to the file while importing it into the other DB.
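If the SSMS dialog keeps fighting you, another workaround is to write the export yourself and qualify every field, so embedded commas, quotes, and newlines cannot break the structure. A sketch, assuming pyodbc and placeholder connection/query values:

```python
import csv
import pyodbc  # assumption: any DB-API driver would do

# Bypass the SSMS export dialog: write the query results directly,
# quoting every field. Connection string and query are placeholders.
conn = pyodbc.connect("DSN=MyServer;Trusted_Connection=yes")
cursor = conn.cursor()
cursor.execute("SELECT * FROM dbo.MyTable")

with open("export.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # qualify every field
    writer.writerow(col[0] for col in cursor.description)
    writer.writerows(cursor.fetchall())
```

On the SSIS side you would then set the flat file connection manager's code page to 65001 (UTF-8) and the text qualifier to " so both ends agree.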
I want to open a CSV file (saved from OpenOffice Calc) in Weka.
I keep getting an error: "wrong number of values. 140 read, 139 expected on line 3."
The CSV was already fixed with quotes around the labels, and I count 140 values on the first lines.
What is wrong here?
Link to the file.
Turns out there was a value somewhere far beyond sight in the Excel file I was exporting.
I noticed it because all the rows ended with a comma instead of nothing.
I carefully selected only the right range, copied it into a new document, and it works.
Hope this helps somebody else as well.
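More generally, you can locate the offending rows before opening the file in Weka. A small diagnostic sketch, assuming a hypothetical file name:

```python
import csv

# Report every row whose field count differs from the header's, the
# cause of Weka's "wrong number of values" error.
with open("survey.csv", newline="") as f:
    reader = csv.reader(f)
    expected = len(next(reader))
    for lineno, row in enumerate(reader, start=2):
        if len(row) != expected:
            print(f"line {lineno}: {len(row)} values, expected {expected}")
```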
I had the same error! I found the solution.
Just remove all the double quotes and single quotes from the .csv/.xls file.
For example, under the Name column, if the value is "john" it throws an error. Make it john by removing the quotes.
To remove all the quotes, go to the Excel file's Find and Replace box.
Find what: "
Replace with: (leave empty)
I also went through the same problem when I was using Weka and importing a CSV file.
The problem is the wrong formatting of the file.
In my file, one of the columns contained the word GOV'T. What I did was remove the "'" and write the whole word, GOVERNMENT, and it worked.
Hope this helps!
I had the same error. The problem was a single quote character in a string value. The solution for me was to enclose the whole string value in double quotes.
So I had to convert
this: ...,Uncharted 3: Drake's Deception,...
to this: ...,"Uncharted 3: Drake's Deception",...
using Weka v3.8.0.
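To apply the same fix across the whole file rather than value by value, a short Python sketch (file names are assumptions) re-writes the CSV with every field double-quoted:

```python
import csv

# Re-write the CSV with every field enclosed in double quotes, so
# embedded single quotes (and commas) are harmless.
with open("games.csv", newline="") as src, \
     open("games_quoted.csv", "w", newline="") as dst:
    csv.writer(dst, quoting=csv.QUOTE_ALL).writerows(csv.reader(src))
```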
This is because of an extra column being added. To get rid of the error, select that whole extra column and delete it.
That should work fine. :)
I also encountered that error. My CSV file contains floating-point numbers. I solved the problem by replacing "," with ".".
For me, all of the above worked. I replaced ", ', and , with spaces.
I had the same error before. I changed my .xls files to remove any blank rows. Sometimes Weka loaded too many ",", but once I cleared the blank rows, Weka worked.
If you copied data from another file using Ctrl+A, Ctrl+C and Ctrl+V, you may have copied extra columns. If you open the CSV file in Notepad you will see a comma at the end of each row; that trailing comma is what causes the error.
To avoid it, hold Ctrl and select the columns one by one, then Ctrl+C, and copy them into a new file which you will use in Weka.
Or you can use another method to remove the comma at the end of each row, such as the sketch below.
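One such method is a few lines of Python (file names are assumptions):

```python
# Strip one trailing comma from every row. Spot-check a few lines first
# to be sure the last column isn't a legitimately empty field.
with open("data.csv") as src, open("data_fixed.csv", "w") as dst:
    for line in src:
        row = line.rstrip("\n")
        if row.endswith(","):
            row = row[:-1]          # drop the spurious trailing comma
        dst.write(row + "\n")
```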
I encountered the same problem.
Replacing/erasing all " and ' with spaces worked for me!