Importing CSV into RapidMiner is not loading data properly

Importing a CSV file into RapidMiner is not loading the data properly into the attributes/columns and returns errors.
I have set the parameter values correctly in the 'Data Import Wizard'.
Column Separation is set to comma, and when I check the "Use Quotes" parameter I see too many "?" values appearing in the columns even though there is data in the actual CSV file.
When I do not check the "Use Quotes" option, the content of the columns is distributed across different columns, i.e., data does not appear in the correct column. It also gives an error for the date column.
How can I resolve this? Any suggestions please? I have watched a lot of RapidMiner videos and read about it, but that did not help.
I am trying to import Twitter conversation data which I exported from a 3rd-party SaaS tool that extracts Twitter data for us.
Could someone help me soon please? Thanks, Geeta

It's virtually impossible to debug this without seeing the data.
The Use Quotes option requires that each field is surrounded by double quotes. Do not use it if your data does not contain these, because the import process will import everything into the first field.
When you use comma as the delimiter, the behaviour you observe is most likely caused by additional commas contained in the data itself, which seems likely if the data comes from Twitter. This confuses the import because it is simply looking for commas.
Generally, if you can get the input data changed, try to have it produced with a delimiter that cannot appear in the raw text. Good examples would be | or tab. If you can also get quotes around the fields, this will help because it allows delimiter characters to appear inside a field.
Date formats can be handled using the date format parameter, but my advice is to import the date field as a polynominal and then convert it to a date later using the Nominal to Date operator. This gives more control, especially when the input data is not clean.
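To see why the quoting setting matters, it can help to step outside RapidMiner and parse a sample line with a generic CSV parser. Below is a minimal Python sketch (the tweet-like record is made up) showing that a parser which honours double quotes keeps embedded commas inside one field, while a naive split on commas scatters the text across columns:

import csv
import io

# A made-up tweet-like record: the text field contains commas and is double-quoted
line = '42,2016-03-01 10:15,"@acme thanks, great service, will buy again",positive'

# Naive split on commas: the text field is broken into extra columns
print(line.split(','))
# ['42', '2016-03-01 10:15', '"@acme thanks', ' great service', ' will buy again"', 'positive']

# A CSV parser that honours double quotes keeps the field intact
print(next(csv.reader(io.StringIO(line))))
# ['42', '2016-03-01 10:15', '@acme thanks, great service, will buy again', 'positive']

This is the same distinction RapidMiner makes with "Use Quotes": it only helps when the fields really are quoted in the file.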

Related

ADF: Is it possible to gracefully handle issues with a csv to Azure SQL import?

I have a comma separated csv file with double quotes as text identifier. Sometimes the source system export is incorrect, e.g.:
A text field includes a double quote, causing ADF to think there is one extra column and fail.
A text field includes the escape character, causing ADF to concatenate two columns and fail with the error that there is one column less than expected.
The source system vendor is unable to fix this, so these errors will happen every now and then. Is it possible for ADF to just save the whole row into a logfile/logtable and skip this line?
I am aware of this question, but I can't change the escape character in this case.
Thanks in advance for your answer!
Johan
In ADF, if you are using the Copy Activity, you can use the Fault Tolerance and Enable Logging features to achieve this. When transforming data with a data flow, use the "Error row handling" feature instead.
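If the built-in features are not enough, for example because the rejected rows also need to be inspected and reported, the same skip-and-log idea can be applied before the copy runs. This is a rough Python sketch rather than anything ADF-specific; the file names and the expected column count are placeholders:

import csv

EXPECTED_COLUMNS = 12  # placeholder: the number of columns the target table expects

with open('export.csv', newline='', encoding='utf-8') as src, \
     open('clean.csv', 'w', newline='', encoding='utf-8') as good, \
     open('rejected.log', 'w', encoding='utf-8') as bad:
    writer = csv.writer(good)
    for lineno, row in enumerate(csv.reader(src), start=1):
        if len(row) == EXPECTED_COLUMNS:
            writer.writerow(row)
        else:
            # keep the whole problematic row so it can be fixed or reported later
            bad.write(f'line {lineno}: {row!r}\n')

The clean file then goes through the normal copy, and the log file holds everything that would otherwise have failed the run.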

Unable to import 3.4GB csv into Redshift because values contain free text with commas

So we have a 3.6GB csv that we have uploaded onto S3 and now want to import into Redshift, then do the querying and analysis from iPython.
Problem 1:
This comma-delimited file contains free-text values that also contain commas, and this interferes with the delimiting, so we can't upload it to Redshift.
When we tried opening the sample dataset in Excel, Excel surprisingly put everything into the correct columns.
Problem 2:
A column that is supposed to contain integers has some records containing alphabetic characters to indicate some other scenario.
So the only way to get the import through is to declare this column as varchar, but then we can't do calculations on it later.
Problem 3:
The datetime data type requires the date time value to be in the format YYYY-MM-DD HH:MM:SS, but the csv doesn’t contain the SS and the database is rejecting the import.
We can’t manipulate the data on a local machine because it is too big, and we can’t upload onto the cloud for computing because it is not in the correct format.
The last resort would be to scale the instance running iPython all the way up so that we can read the big csv directly from S3, but this approach doesn’t make sense as a long-term solution.
Your suggestions?
Train: https://s3-ap-southeast-1.amazonaws.com/bucketbigdataclass/stack_overflow_train.csv (3.4GB)
Train Sample: https://s3-ap-southeast-1.amazonaws.com/bucketbigdataclass/stack_overflow_train-sample.csv (133MB)
Try using a different delimiter or use escape characters.
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_preparing_data.html
For the second issue, if you want to extract only the numbers from the column after loading it as char, use regexp_replace or other functions.
For the third issue, you can also load it into a VARCHAR field in a staging table and then cast it when loading into the final table, for example:
cast(left(column_name, 10) || ' ' || right(column_name, 6) || ':00' as timestamp)
For the first issue, you need to find a way to differentiate between the two kinds of commas: the delimiters and the commas inside the text. Once you have done that, replace the delimiters with a different character and use that character as the delimiter in the Redshift COPY command.
For the second issue, you first need to figure out whether this column needs to be used for numerical aggregations once loaded. If yes, you need to get the data cleaned up before loading. If no, you can load it directly as a char/varchar field; all your queries will still work, but you will not be able to do any aggregations (sum/avg and the like) on this field.
For problem 3, you can use the TEXT(date, "yyyy-mm-dd hh:mm:ss") function in Excel to do a mass replace on this field.
Let me know if this works out.
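Both answers come down to pre-processing the file so that the delimiter is unambiguous and the timestamps are complete. Since the file is too big for Excel, one option is a single streaming pass in Python; this is only a sketch, and the column index, output file name, and the assumption that the free-text fields are already double-quoted (which the Excel behaviour suggests) are all placeholders:

import csv

TIMESTAMP_COLUMN = 3  # placeholder: zero-based index of the datetime column

# Stream the file row by row so its size is not a problem, re-writing it with a
# pipe delimiter and padding the timestamps with the missing seconds.
with open('stack_overflow_train.csv', newline='', encoding='utf-8') as src, \
     open('stack_overflow_train_clean.csv', 'w', newline='', encoding='utf-8') as dst:
    reader = csv.reader(src)                # understands double-quoted fields containing commas
    writer = csv.writer(dst, delimiter='|')
    for row in reader:
        if len(row) > TIMESTAMP_COLUMN and len(row[TIMESTAMP_COLUMN]) == 16:
            row[TIMESTAMP_COLUMN] += ':00'  # 'YYYY-MM-DD HH:MM' -> 'YYYY-MM-DD HH:MM:SS'
        writer.writerow(row)

The cleaned file can then be uploaded back to S3 and loaded with COPY using DELIMITER '|' (adding the CSV or REMOVEQUOTES option if any fields still come out quoted).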

How can I quickly reformat a CSV file into SQL format in Vim?

I have a CSV file that I need to format (i.e., turn into) a SQL file for ingestion into MySQL. I am looking for a way to add the text delimiters (single quotes) to the text, but not to the numbers, booleans, etc. I am finding it difficult because some of the text that I need to enclose in single quotes contains commas itself, making it difficult to key in on the commas for search and replace. Here is an example line I am working with:
1239,1998-08-26,'Severe Storm(s)','Texas,Val Verde,"DEL RIO, PARKS",'No',25,"412,007.74"
This is a FEMA data file, with 131,246 lines, that I got off of data.gov and am trying to get into a MySQL database. As you can see, I need to insert a single quote after Texas and before Val Verde, so I tried:
s/,/','/3
But that only replaced the first occurrence of the comma on the first three lines of the file. Once I get past that, I will need to find a way to deal with "DEL RIO, PARKS", as that has a comma that I do not want to place a single quote around.
So, is there a "nice" way to manipulate this data to get it from plain CSV to a proper SQL format?
Thanks
CSV files are notoriously dicey to parse. Different programs export CSV in different ways, possibly including strangeness like embedding new lines within a quoted field or different ways of representing quotes within a quoted field. You're better off using a tool specifically suited to parsing CSV -- perl, python, ruby and java all have CSV parsing libraries, or there are command line programs such as csvtool or ffe.
If you use a scripting language's CSV library, you may also be able to leverage the language's SQL import as well. That's overkill for a one-off, but if you're importing a lot of data this way, or if you're transforming data, it may be worthwhile.
I think that I would also want to do some troubleshooting to find out why the CSV import into MySQL failed.
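As a concrete illustration of the scripting-language route, here is a rough Python sketch that reads the CSV and emits INSERT statements, wrapping every value in single quotes and escaping any embedded single quotes; the table name and file names are made up, since the question does not show the real schema:

import csv

TABLE = 'disasters'  # hypothetical table name

with open('fema.csv', newline='', encoding='utf-8') as f, \
     open('fema.sql', 'w', encoding='utf-8') as out:
    for row in csv.reader(f):  # handles "DEL RIO, PARKS" and "412,007.74" correctly
        values = ", ".join("'" + field.replace("'", "''") + "'" for field in row)
        out.write(f'INSERT INTO {TABLE} VALUES ({values});\n')

This quotes every field, including the numeric ones, which MySQL will generally coerce back to numbers on insert; if that is not acceptable, the numeric columns can be detected and left unquoted.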
I would take an approach like this:
:%s/,\("[^"]*"\|[^,"]*\)/,'\1'/g
:%s/^\("[^"]*"\|[^,"]*\)/'\1'/g
In words: after each comma, match either a double-quoted field or an unquoted run of characters containing no comma or quote, and wrap whatever was matched in single quotes.
The second command does the same for the first field of each row, anchoring the match at the start of the line instead of at a comma.
Try the csv plugin. It allows you to convert the data into other formats. The help includes an example of how to convert the data for importing it into a database.
Just to bring this to a close, I ended up using @Eric Andres' idea, which was the MySQL LOAD DATA option:
LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE MYTABLE FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n';
The initial .csv file still took a little massaging, but not as much as if I had done it all by hand.
When I commented that LOAD DATA had truncated my file, I was incorrect. I was treating the file as a typical .sql file and assumed the "ID" column I had added would auto-increment. This turned out not to be the case. I had to create a quick script that prepended an ID to the front of each line. After that, the LOAD DATA command worked for all lines in my file. In other words, all data has to be in place within the file before the load, or the load will not work.
Thanks again to all who replied, and to @Eric Andres for his idea, which I ultimately used.

Lose data in random fields when importing from file into table using phpmyadmin

I have an Access DB. I exported the tables to xlsx, then saved them as .ods using OpenOffice because I found out that phpMyAdmin/MySQL no longer supports Excel files. I have my MySQL database formatted exactly as it should be to accept the data. I import and everything seems fine except one little detail.
In some fields, the value is NULL instead of the value it should have according to the .ods file. Some rows show the value for that field correctly, some show NULL.
Also, the "faulty" rows have some fields that show the value 0 for fields that were empty in the imported file (instead of NULL). The default value for those fields in MySQL is NULL. Each row has many fields like that, all of the same data type (tinyint). Some appear correctly as NULL and some have the value 0.
I can't see a pattern on all these.
Any help is appreciated.
Check that imported strings have ("") quotes around them and that NULL values do not, and that all fields are separated appropriately, usually by a "," comma with the record/row delimited by a ";" semicolon. The best way to check what MySQL is looking for is to export some existing data in the same format and compare it against what you are trying to import; one missed quote and the deal is off. Be consistent in the use of either double " quotes or single ' quotes, and note that the ` backtick character is not used around values. If you are "squishing" your data through an application that applies "smart quotes", as MS Word or OpenOffice can do, this too can cause issues. Add the word NULL, either with or without quotes, in your CSV import where the value should be NULL.
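For example (the values and columns here are made up), rows written the way this answer describes would look something like the following, with strings double-quoted, fields separated by commas, and missing values written out as NULL rather than left empty:

"John Smith",3,NULL,1
"Jane Doe",NULL,0,NULL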

MS Access Import from Text File problems

I'm trying to import a text file into an Access database. It's not one I've written myself, but the spec for the delimited text file is set up properly and the file imports properly using the wizard. When I try to use the import functions of the app itself, the ImportError table reports "Field Truncation" for one of the fields. Any help would be appreciated.
I would suggest examining each column that you're bringing in, and then measuring that against your table column properties in table "Design" mode.
Common things I've seen throw this error are:
Imported text exceeding the 255 character limit of a field (in which case you can change the field type to memo)
Date fields being set up as short date format, and then trying to import long dates/times into the field.
Text other than "yes/no/true/false" being imported into Yes/No/True/False fields.
Double-check your columns for similar names, and then check the data being imported. Sometimes when multiple people are working on a project and appending data, columns with similar names can get confused...especially if the column is collapsed so that its name is not entirely showing.