Importing CSV file in Talend - how to set options to match Excel

I have a CSV file that I can open in Excel 2012 and it comes in perfectly. When I try to set up the metadata for this CSV file in Talend, the fields (columns) are not splitting the same way as Excel splits them. I suspect I am not properly setting the metadata.
The specific issue is that I have a column with string data in it which may contain commas within the string. For example suppose I have a CSV file with three columns: ID, Name and Age which looks like this:
ID,Name,Age
1,Ralph,34
2,Sue,14
3,"Smith, John", 42
When Excel reads this CSV file it looks at the second element of the third row ("Smith, John") as a single token and places it into a cell by itself.
In Talend it tries to break this same token into two, since there is a comma within the token. Apparently Excel ignores all delimiters within a quoted string, while Talend by default does not.
My question is: how do I get Talend to behave the same as Excel?

If you use the tFileInputDelimited component to read this CSV file, keep the field delimiter as "," and, under the component's CSV options, enable the Text Enclosure option and set it to " (a double quote). If you define the file through the metadata wizard instead, there is an equivalent option to define the string/text enclosure; setting it to " there resolves your problem.
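If you want to sanity-check the file outside Talend first, any CSV reader that honors a double-quote text enclosure will keep "Smith, John" together. A quick PowerShell check, as a minimal sketch (people.csv is an assumed file name for the sample above):

# Parse the sample with a quote-aware CSV reader; the quoted value stays one field.
$rows = Import-Csv -Path .\people.csv    # people.csv = the sample data shown above (assumed name)
$rows[2].Name                            # -> Smith, John  (a single field, not two)

That is the same behavior you get in Talend once the text enclosure is set to ".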

Related

ADF Copy Activity Fails CSV to Parquet when CSV has space in header column

When using a copy activity in Azure Data Factory to copy a typical CSV file with a header row into Parquet sink, the SINK fails with the following error due to the column names in the CSV having spaces in the header.
The column name is invalid. Column name cannot contain these
character:[,;{}()\n\t=]
The CSV is pipe delimited and displays just fine using the preview feature of the dataset with the first row marked as the header. I see no options to handle this use-case on the Parquet side (sink) of the copy activity. I realize this can probably be addressed using a data flow to transform column names to remove spaces, but does that mean the native copy activity is incapable of handling this condition where a space is included in a header row?
EDIT: I should have added that dataset uses default mappings so that we can use the same dataset for any CSV to PARQUET copy. The answer provided will work for explicit mappings, but we don't see any resolution for folks who use default/dynamic mappings since we do not have access to the column names to remove spaces.
As we can note from the official doc here:
Error code: ParquetInvalidColumnName
Message: The column name is invalid. Column name cannot contain these character:[,;{}()\n\t=]
Cause: The column name contains invalid characters.
Resolution: Add or modify the column mapping to make the sink column name valid.
If you would like to continue using the copy activity, there are a few workarounds:
1. Make sure you have selected the Column delimiter as Pipe (|).
2. If feasible, in the mapping settings, import the schema and rename the destination column names so they contain no spaces.
This is still an ongoing issue/feature request; follow here for more.
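For the default/dynamic-mapping case raised in the edit, where you cannot touch the column names in the mapping, one hedged option is to sanitize the header row of the CSV before the copy activity runs (for example from a script step you control). A minimal PowerShell sketch, with placeholder paths:

# Replace spaces in the header row with underscores so the Parquet sink accepts the column names.
# Paths are placeholders; adjust to wherever the file is accessible.
$path     = '.\input.csv'
$lines    = Get-Content -Path $path
$lines[0] = $lines[0] -replace ' ', '_'          # header row only
Set-Content -Path '.\input_clean.csv' -Value $lines

The copy activity then points at the cleaned file and the default mappings work unchanged.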

structure in getMetadata activity for csv file dataset ignores an extra comma in the file's first row

I am using a reference CSV file with just the correct number and name of columns and want to compare its structure with that of incoming CSV files before proceeding to use Copy Data to import the incoming CSV data into Azure SQL. Files arriving in blob storage trigger this pipeline.
The need to validate the structure has arisen due to random files arriving with a trailing comma in the header row which causes a failure in the copy data pipeline as it sees the trailing comma as an extra column.
I have set up a getMetadata for both the reference file & the incoming files. Using an If Condition, I compare schemas.
The problem I have is that the output of getMetadata is ignoring the trailing comma.
I have tried the 'column count' and 'structure' arguments. The problem is the same either way: getMetadata fails to see the trailing comma as an issue.
Any help appreciated
I tried with extra commas in the header of a CSV file, and Get Metadata is not ignoring them; it reads those extra commas as additional columns too.
Please check the screenshots below.
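If Get Metadata still does not surface the extra column in your case, a hedged fallback is to compare header column counts yourself outside ADF (for example in a script step, once the files are readable locally). A rough sketch, with placeholder paths:

# Compare the header of an incoming file against the reference file.
# A trailing comma shows up as an extra (empty) column name after the split.
$refCols = (Get-Content -Path '.\reference.csv' -TotalCount 1) -split ','
$newCols = (Get-Content -Path '.\incoming.csv'  -TotalCount 1) -split ','

if ($newCols.Count -ne $refCols.Count) {
    Write-Warning "Header mismatch: expected $($refCols.Count) columns, found $($newCols.Count)"
}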

csv data with comma values throws error while processing the file through the BizTalk flatfile Disassembler

I'm going to pick up a CSV file in BizTalk and, after some processing, I want to send it on to two or more different systems.
To receive the CSV file, I'm using the default Flat File Disassembler to break it up and construct it as XML with the help of a generated schema. I can do that successfully with consistent data; however, if the data contains a comma (other than the delimiters), BizTalk fails!
Is there any other way to do this without using a custom pipeline component?
I'm expecting a simple configuration within the Flat File Disassembler component!
So, here's the deal. BizTalk is not failing. Well, it is, but that is the expected and correct behavior.
What you have is an invalid CSV file. The CSV specification disallows the comma in field data unless a wrap character is used. Either way, both are reserved characters.
To accept the comma in field data, you must choose a wrap character and set that in the Wrap Character property in the Flat File Schema.
This is valid:
1/1/01,"Smith, John", $5000
This is not:
1/1/01,Smith, John, $5000
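You can see the difference with any quote-aware CSV parser; a quick PowerShell illustration (the header names here are made up for the example):

# The quoted line parses into exactly three fields:
ConvertFrom-Csv '1/1/01,"Smith, John", $5000' -Header Date,Name,Amount
# -> Date=1/1/01, Name=Smith, John, Amount= $5000   (three fields)

# Without the wrap character the name is split and every later field shifts:
ConvertFrom-Csv '1/1/01,Smith, John, $5000' -Header Date,Name,Amount,Extra
# -> Date=1/1/01, Name=Smith, Amount= John, Extra= $5000   (four fields)
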
Since your schema definition has ',' as the delimiter, the Flat File Disassembler will treat data containing a comma as two fields and will fail due to the mismatch in column count.
You have a few options:
Either add a new field to the schema, if you know the comma in the data will only ever appear in a particular field.
Or change the delimiter in the flat file from , to | (pipe) or some other character, so that the data does not conflict with the delimiter.
Or, as you mentioned, manipulate the flat file in a custom pipeline component, which should be the last resort if the two options above are not feasible.

How to Convert a Comma-Separated File to Pipe-Delimited File in Powershell?

I am downloading CSV files which are comma-separated. The problem I'm having is that the commas are screwing up my import into a database table (SQL Server). For example, I have a header row called hotel_name, but some of the names are like the following:
HOTEL_NAME
hilton
cambridge,the
The problem is that a field containing a comma in the hotel name spills over into the adjacent column. I'm wondering if converting from CSV to a pipe-delimited format will work.
The problem I'm having is that I'm not sure how to get started. I've tried following the PowerShell documentation but get basic errors. I think this is because I'm new to PowerShell and not understanding something. Can someone please post a script of how to change the comma-separated file to a pipe-delimited file?
Sorry if this is confusing; I'm finding the formatting on Stack Overflow to be a bit crazy.
Taken from Dealing with commas in a CSV file
Use " to wrap data that contains a comma.
For example
Server000,"Microsoft(R) Windows(R) Server 2003, Enterprise Edition"
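Assuming the downloaded file does wrap such values in quotes (as above), the conversion itself is short; a minimal PowerShell sketch, with placeholder file names:

# Read the comma-separated file (quote-aware), then write it back out pipe-delimited.
# hotels.csv / hotels_pipe.csv are placeholder names.
Import-Csv -Path .\hotels.csv |
    Export-Csv -Path .\hotels_pipe.csv -Delimiter '|' -NoTypeInformation

Import-Csv honors the double quotes, so "cambridge,the" stays in one column, and Export-Csv rewrites the file using | as the delimiter. Note that Export-Csv wraps values in double quotes; if your import tool supports a text qualifier that is usually fine, and newer PowerShell versions (7+) let you adjust this with -UseQuotes.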

Loading comma separated .csv file into database

I was trying to load a .csv file into my database. It is a comma-delimited file, and for one of the columns there is a comma (,) within the data itself, e.g. Texas,Houston. Can someone help me get rid of the comma in between? The package I have created recognizes the value after the comma as a new column, but it should not be like that. Can any of you help me with this? I am getting an error in the Flat File source itself. I thought of using a Derived Column, but the package is failing at the source point itself.
Well some "comma" delimited files have ,"something or other", when there is a string and only use ,numeric_value, when its a number type. If your file is like this then you can preprocess your file changing ," for some (other) rare character, and similarly ", then replace the , if it occurs between the two rare characters. Or you can count the comma in any line and if its greater than the number pf delimited columns, manually frocess the exceptions