BigQuery - Handle Double Quotes & Pipe Field Separator in CSV (Federated Table)

I am currently facing issues loading data into BigQuery, or even creating a federated table, because the incoming data is delimited by the pipe symbol (|) and has escaped double quotes in fields inside the file.
Sample data (I also tried escaping double-quote values with doubled double quotes, i.e. "", at the field level):
13|2|"\"Jeeps, Trucks & Off-Roa"|"JEEPSTRU"
Create DDL
CREATE OR REPLACE EXTERNAL TABLE `<project>.<dataset>.<table>`
WITH PARTITION COLUMNS (
  dt DATE
)
OPTIONS (
  allow_jagged_rows=true,
  allow_quoted_newlines=true,
  format="csv",
  skip_leading_rows=1,
  field_delimiter="|",
  uris=["gs://path/to/bucket/table/*"],
  hive_partition_uri_prefix='gs://path/to/bucket/table'
)
Query
SELECT
  *
FROM
  `<project>.<dataset>.<table>`
WHERE field_ LIKE '%Jeep%'
Error
Error while reading table: <project>.<dataset>.<table>, error message: Error detected while parsing row starting at position: 70908. Error: Data between close double quote (") and field separator.
However, it works if I create the table with an empty quote character (quote=""), which makes it hard to filter in a SQL query.
I need the field_ data to be loaded as "Jeeps, Trucks & Off-Roa
I tried to find various documentation pages and Stack Overflow questions, but since everything is old or not working (or I am just unlucky), I am posting this question again.
I have a very basic question: what is the best way to escape double quotes in a column of a federated BigQuery table to avoid this problem, without preprocessing the raw CSV/PSV data?

This is not a problem with external tables or BigQuery, but rather a feature of CSV files. I ran into something similar once when I uploaded data to a table in the UI. I found some sources (which, by the way, I cannot find right now) saying that double quotes should be doubled ("") in a CSV file to get this behavior. Using your example:
13|2|"""Jeeps, Trucks & Off-Roa"|"JEEPSTRU"
I tested this with your sample. When I loaded the data into a table from the CSV I got the same error, and after applying the change above it worked as expected. The resulting field value is:
"Jeeps, Trucks & Off-Roa
I suppose it will work for you as well.
EDIT: I found it in the basic rules of CSV on Wikipedia:
Each of the embedded double-quote characters must be represented by a pair of double-quote characters.
1997,Ford,E350,"Super, ""luxurious"" truck"
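A quick way to sanity-check the doubling rule outside BigQuery is to run the line through any CSV parser that follows the same conventions. A minimal sketch with Python's csv module (illustrative only; nothing here is BigQuery-specific):
import csv

# Parse the pipe-delimited sample where the embedded quote is doubled ("")
line = '13|2|"""Jeeps, Trucks & Off-Roa"|"JEEPSTRU"'
row = next(csv.reader([line], delimiter='|', quotechar='"'))
print(row)  # ['13', '2', '"Jeeps, Trucks & Off-Roa', 'JEEPSTRU']
The doubled quote comes back as a single literal quote in the third field, which is exactly the value the question asks for.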

Related

Why doesn't Excel quote single quotes when exporting CSV?

When I export data from Excel in CSV format, it encapsulates some data in double quotes.
E.g.
8" becomes "8""", and I believe this operation is meant to let the database understand the inner quote later on.
But for a single quote, 8' stays the same, and this causes a problem while I am importing the CSV.
Why isn't 8' quoted as "8'" too?
8' becomes ' during import, while "8'" results in 8'. Not quoting the single quote leads to data loss.
Related questions:
what does quotechar mean in mysql while importing data?
Excel adds extra quotes on CSV export
After many experiments, I finally found a pretty close answer.
Conclusion first:
It is MySQL Workbench's problem: its import wizard works badly. I tested every data set under Navicat, and Navicat got everything right.
A single quote can cause unexpected behavior.
Test:
By default, the MySQL Workbench import wizard takes the first row's values as the column names, while in Navicat I can configure that.
(All test files are Excel-exported CSVs in UTF-8 encoding.)
Test 1:
e.g. 8' (only one record), without a column name.
MySQL Workbench: pops up an unknown error, and whatever configuration you change, you can't get the original data.
Navicat: works fine.
Test 2:
e.g. 8', with one record and a column name, or with extra records and no column name.
MySQL Workbench: handles the single quote properly.
Navicat: no problem.
Test 3:
If a single quote exists, in most situations the import wizard can't handle double quotes well,
e.g. when single-quote data comes before double-quote data.
MySQL Workbench: fails totally.
Navicat: no problem.
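For reference, a parser that follows the usual CSV conventions treats the single quote as an ordinary character, which supports the conclusion that the wizard is at fault. A quick check with Python's csv module (illustrative only):
import csv

# Per the usual CSV rules, ' is an ordinary character:
# both the bare and the quoted form parse to the same value.
for line in ["8'", "\"8'\""]:
    print(next(csv.reader([line])))  # prints ["8'"] both times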

Neo4j CSV Load: How to avoid null values and escape-character errors

I am trying to load a large volume of data into a graph using a CSV load script (xyx.cpl) and Neo4jShell.
Mostly it is going well, but sometimes I receive the following errors:
Cannot merge node using null property value ...
Error related to escape characters
I am seeking assistance on the best way to handle these issues in the import script.
Thanks in advance
Cannot merge node using null property value
You can use a WITH statement to filter out rows that have a null value for the property you are using in the MERGE. For example:
LOAD CSV WITH HEADERS FROM "file:///file.csv" AS row
WITH row WHERE row.name IS NOT NULL
MERGE (p:Person {name: row.name})
SET p.age = row.age
...
Error related to escape characters
Can you be a bit more specific about the error you are getting / show a Cypher and data example?
Without seeing your specific error / code here is some info that might help:
the character for string quotation within your CSV file is a double quote "
the escape character is \
more info and some examples here and here
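If you control how the CSV is produced, you can emit embedded quotes already escaped with a backslash, matching the convention described above. A minimal sketch with Python's csv module (the file name and data are made up; check the behavior against your Neo4j version):
import csv

# Write every field quoted, with embedded double quotes escaped as \" rather than ""
rows = [["name", "note"], ["Alice", 'likes "graphs"'], ["Bob", "plain text"]]
with open("file.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL, escapechar="\\", doublequote=False)
    writer.writerows(rows)
# Alice's row is written as: "Alice","likes \"graphs\""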

Escape semicolon; MYSQL for Excel

I want to import data from an Excel sheet into a MySQL database with the MySQL for Excel plugin. Some cells contain text with semicolons, and I have already figured out that this causes a SQL error. I tried escaping the semicolons with a backslash, but I still get the error message. How can I escape the semicolon?
Kai,
this behaviour is purely the fault of MySQL for Excel, and seems to be a bug.
In the meantime, if you are not keen on changing your Excel data as suggested by others there is a workaround:
In your MySQL-for-Excel window click Options and then select Preview SQL statements before they are sent to the server and Accept.
Then proceed as normal with exporting / appending data using the add-in, but when the Review SQL script window appears, copy the contents into a different SQL tool (MySQL Workbench, HeidiSQL, SQLWorkbench, etc.) and run it there. Then click Cancel in the MySQL-for-Excel popups and refresh the query if necessary.
Also: feel free to report the bug at: http://bugs.mysql.com/
Replace the semicolon with some unique text, e.g. [SEMICOLON].
Next, import the data to SQL and run something like:
UPDATE your_table
SET your_field = REPLACE(your_field, '[SEMICOLON]', ';')
WHERE your_field LIKE '%[SEMICOLON]%'
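The pre-import half of that round trip can be automated instead of done by hand in Excel. A sketch in Python (the file names are hypothetical):
import csv

# Swap literal semicolons for a placeholder before handing the file to MySQL for Excel
with open("sheet.csv", newline="") as src, open("safe.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        writer.writerow([cell.replace(";", "[SEMICOLON]") for cell in row])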
I think all you need to do is consider the requirements Excel has when it imports data from CSV files (the parsing rules are probably the same or similar).
In your case, if a field contains any special characters, just quote the values with double quotes before importing the content into Excel.
So:
UPDATE table
-- MySQL does not treat || as string concatenation by default, so use CONCAT()
SET field = CONCAT('"', field, '"')
WHERE field LIKE '%,%'
The following rules should apply:
Fields containing a line-break, double-quote, and/or commas should be quoted
Any field may be quoted (with double quotes)
A (double) quote character in a field must be represented by two (double) quote characters.
More details: Wikipedia: Comma-separated values
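As a quick illustration of those rules, here is how a CSV writer that quotes only when necessary behaves. A sketch with Python's csv module (not tied to any particular database):
import csv, sys

# Default behavior: quote a field only if it contains a delimiter,
# a quote, or a line break, and double any embedded quotes.
writer = csv.writer(sys.stdout)
writer.writerow([1997, "Ford", "E350", 'Super, "luxurious" truck'])
# prints: 1997,Ford,E350,"Super, ""luxurious"" truck"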

Lose data in random fields when importing from file into table using phpmyadmin

I have an Access DB. I exported the tables to .xlsx, then saved them as .ods using OpenOffice,
because I found out that phpMyAdmin/MySQL no longer supports Excel files. I have my MySQL database formatted exactly as it should be to accept the data. I import, and everything seems fine except one little detail.
In some fields, the value is NULL instead of the value it should have according to the .ods file. Some rows show the value for that field correctly, some show NULL.
Also, the "faulty" rows have some fields that show the value 0 for fields that were empty in the imported file (instead of NULL). The default value for those fields in MySQL is NULL. Each row has many fields like that, all of the same data type (tinyint). Some appear correctly as NULL and some have the value 0...
I can't see a pattern in any of this.
Any help is appreciated.
Check that imported strings have double quotes ("") and that NULLs do not, and that all fields are separated appropriately, usually by a comma (,) with the record/row delimited by a semicolon (;). The best way to check what MySQL is looking for is to export some existing data in the same format and compare it against what you are trying to import. One missed quote and the deal is off. Be consistent in the use of either double quotes (") or single quotes ('); the backtick (`) character is not used, as far as I know. If you are squeezing your data through an application that applies "smart quotes", as MS Word or OpenOffice does, this too can cause issues. Add the word NULL, either inside quotes or without, in your CSV import where the values are appropriate.
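If you want to make those NULLs explicit before importing, a small rewrite pass can do it. A sketch in Python (file names are hypothetical; whether the importer interprets the literal word NULL as SQL NULL depends on its settings):
import csv

# Rewrite the CSV so empty fields become the literal word NULL
with open("export.csv", newline="") as src, open("with_nulls.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        writer.writerow([cell if cell.strip() else "NULL" for cell in row])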

Using Excel to create a CSV file with special characters and then Importing it into a db using SSIS

Take this XLS file
I then save this XLS file as CSV and then open it up with a text editor. This is what I see:
Col1,Col2,Col3,Col4,Col5,Col6,Col7
1,ABC,"AB""C","D,E",F,03,"3,2"
I see that the double quote character in column C was stored as AB""C: the column value was enclosed in quotes, and the double quote character in the data was replaced with two double quote characters to indicate that the quote occurs within the data and does not terminate the column value. I also see that the value for column G, 3,2, is enclosed in quotes to make clear that the comma occurs within the data rather than indicating a new column. So far, so good.
I am a little surprised that not all of the column values are enclosed in quotes, but even this seems reasonably OK once I assume that Excel only adds text qualifiers when special characters like a comma or a double quote exist in the data.
Now I try to use SQL Server to import the CSV file. Note that I specify a double quote character as the Text Qualifier character,
and a comma as the column delimiter character. However, note that SSIS imports column 3 incorrectly, e.g., not translating the two consecutive double quote characters as a single occurrence of a double quote character.
What do I have to do to get Excel and SSIS to get along?
Generally, people avoid the issue by using column delimiter characters that are less likely to occur in the data, but this is not a real solution.
I find that if I modify the file from this
Col1,Col2,Col3,Col4,Col5,Col6,Col7
1,ABC,"AB""C","D,E",F,03,"3,2"
...to this:
Col1,Col2,Col3,Col4,Col5,Col6,Col7
1,ABC,"AB"C","D,E",F,03,"3,2"
i.e., removing the two consecutive quotes in column C's value, the data is loaded properly. However, this is a little confusing to me. First of all, how does SSIS determine that the double quote between the B and the C does not terminate that column value? Is it because the following character is not a comma column delimiter or a row delimiter (CRLF)? And why does Excel export it this way?
According to Wikipedia, here are a couple of traits of a CSV file:
Fields containing line breaks (CRLF), double quotes, and commas
should be enclosed in double-quotes. For example:
"aaa","b CRLF
bb","ccc" CRLF
zzz,yyy,xxx
If double-quotes are used to enclose fields, then a double-quote
appearing inside a field must be escaped by preceding it with
another double quote. For example:
"aaa","b""bb","ccc"
However, it looks like SSIS doesn't like it that way when importing. What can be done to get Excel to create a CSV file that could contain ANY special characters used as column delimiters, text delimiters, or row delimiters in the data? There's no reason it can't work using the approach specified in Wikipedia, which is what I thought the old MS DTS packages used to do...
Update:
If I use Notepad change the input file to
Col1,Col2,Col3,Col4,Col5,Col6,Col7,Col8
"1","ABC","AB""C","D,E","F","03","3,2","AB""C"
Excel reads it just fine
but SSIS returns
The preview sample contains embedded text qualifiers ("). The flat file parser does not support embedding text qualifiers in data. Parsing columns that contain data with text qualifiers will fail at run time.
Conclusion:
Just like the error message says in your update...
The flat file parser does not support embedding text qualifiers in data. Parsing columns that contain data with text qualifiers will fail at run time.
Confirmed bug in Microsoft Connect. I encourage everyone reading this to click on this aforementioned link and place your vote to have them fix this stinker. This is in the top 10 of the most egregious bugs I have encountered.
Do you need to use a comma delimiter?
I used a pipe delimiter with no text qualifier and it worked fine. Here is my output from the text file:
1|ABC|AB"C|D,E|F|03|3,2
You have three options, in my opinion:
Read the data into a stage table.
Run any update queries you need on the columns.
Now select your data from the stage table and output it to a flat file.
OR
Use pipes as your delimiters.
OR
Do all of this in a C# application and build it in code.
You could also send the row to a script in SSIS and parse and build the file you want there.
Using text qualifiers and "character" delimited fields is problematic for sure.
Have Fun!