sqlldr : ORA-00911: invalid character - sql-loader

I want to import a CSV file. My script is:
@echo off
set numid=2015092510524361378197540100
sqlldr USER@db/PSW data=csv\2015092510524361378197540100.csv control=ctl\control.ctl log=log\2015092510524361378197540100.log bad=bad\id.bad
pause
My table is:
CREATE TABLE SV (
  NO1 VARCHAR2(255),
  NAMA VARCHAR2(255),
  ALAMAT VARCHAR2(255),
  id VARCHAR2(20),
  JAB VARCHAR2(50),
  numid VARCHAR(55)
);
My control.ctl is:
OPTIONS (SKIP=43, ERRORS=12000)
LOAD DATA
APPEND INTO TABLE sv
WHEN nama <> ''
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  no FILLER,
  no1 "TRIM(:no1)",
  nama "TRIM(:nama)",
  alamat "TRIM(:alamat)",
  id "TRIM(:id)",
  jab "TRIM(:jab)",
  numid "%numid%"
)
The error is:
Record 10: Rejected - Error on table sV, column numid.
ORA-00911: invalid character
Please let me know which one is wrong. Thanks all.

The "%numid%" in the control file is the root cause: sqlldr reads the control file itself, so the Windows batch variable is never expanded, and Oracle then tries to evaluate the literal %numid% as a SQL expression, in which % is an invalid character (hence ORA-00911). Also, since numid is declared VARCHAR(55), change the table datatype to VARCHAR2 and change the control file to read:
numid "TRIM(:numid)"

Related

Load data to Snowflake using COPY INTO

I have been trying to load CSV data into Snowflake using the COPY INTO command.
This is the sample data:
4513194677~"DELL - ULTRASHARP 32\" MONITOR 4K U3223QE"~""~""
I have tried using the COPY INTO syntax below:
file_format = (
    type = 'csv'
    field_delimiter = '~'
    skip_header = 1
    record_delimiter = '\\n'
    field_optionally_enclosed_by = '"'
    ESCAPE = 'NONE'
    ESCAPE_UNENCLOSED_FIELD = 'NONE'
)
However, I am getting this error: "Found character 'M' instead of field delimiter '~'"
How can I escape the " and load the column data as DELL - ULTRASHARP 32 " MONITOR 4K U3223QE?
If I try to use ESCAPE, I get the error below when running the COPY command:
[ERROR] ProgrammingError: 001003 (42000): 01a8e01d-3201-36a9-0050-4502537cfc7f: SQL compilation error:
syntax error line 15 at position 43 unexpected '''.
syntax error line 20 at position 20 unexpected ')'.
file_format = (
    type = 'csv'
    field_delimiter = '~'
    skip_header = 1
    record_delimiter = '\\n'
    field_optionally_enclosed_by = '"'
    ESCAPE = '\\'
    ESCAPE_UNENCLOSED_FIELD = '\\'
)
Try using two double quotes in the data instead of one, without trying to escape the double quote. For a value like
Data similar to "sample"
you can have your CSV formatted like below:
"Data similar to ""sample"""

MySQL load file not working as expected

I have this data in my file (fields can sometimes be in double quotes), as shown below:
1, "ala", johnes, 2017-09-01, 100
2, dqwdqdq,, 2017-09-01, 101
The data is loaded into the database, but the quotes are inserted into the column as well, so instead of seeing ala I see "ala" in the column.
My code:
Using exclecon As New MySqlConnection("Server=Localhost;DataBase=mobily;user=root;password=897;")
Dim uploadQry As String = "Load DATA LOCAL INFILE 'C:/Users/Robert/Desktop/test.csv' INTO TABLE Test FIELDS TERMINATED BY ',' ENCLOSED BY '""' ESCAPED BY '' LINES TERMINATED BY '\r\n' IGNORE 0 LINES;"
Dim myCUpload As New MySqlCommand(uploadQry, exclecon)
exclecon.Open()
myCUpload.ExecuteNonQuery()
End Using
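A hedged guess at the cause: MySQL only honors ENCLOSED BY when the quote is the very first character of the field, and here every field starts with a space after the comma, so "ala" is read as literal data, quotes included. A minimal sketch of the statement treating comma-plus-space as the terminator (this assumes the separator is consistently ', '; the empty field in the second sample row would still need checking):
LOAD DATA LOCAL INFILE 'C:/Users/Robert/Desktop/test.csv'
INTO TABLE Test
FIELDS TERMINATED BY ', ' ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\r\n'
IGNORE 0 LINES;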

PostgreSQL json to columns error: Character with value must be escaped

I am trying to load some data from a table containing JSON rows.
One field can contain special characters such as \t and \r, and I want to keep them as-is in the new table.
Here is my file:
{"text_sample": "this is a\tsimple test", "number_sample": 4}
Here is what I do:
drop table if exists temp_json;
drop table if exists test;
create temporary table temp_json (values text);
copy temp_json from '/path/to/file';
create table test as (
  select
    (values ->> 'text_sample') as text_sample,
    (values ->> 'number_sample') as number_sample
  from (
    select replace(values, '\', '\\')::json as values
    from temp_json
  ) a
);
I keep getting this error:
ERROR: invalid input syntax for type json
DETAIL: Character with value 0x09 must be escaped.
CONTEXT: JSON data, line 1: ...g] Objection to PDDRP Mediation (was Re: Call for...
How do I escape those characters?
Thanks a lot
Copy the file as CSV, using a different quote character and delimiter:
drop table if exists test;
create table test (values jsonb);
\copy test from '/path/to/file.csv' with (format csv, quote '|', delimiter ';');
select values ->> 'text_sample', values ->> 'number_sample'
from test;
?column? | ?column?
-----------------------------+----------
this is a simple test | 4
As mentioned in Andrew Dunstan's PostgreSQL and Technical blog
In text mode, COPY will be simply defeated by the presence of a backslash in the JSON. So, for example, any field that contains an embedded double quote mark, or an embedded newline, or anything else that needs escaping according to the JSON spec, will cause failure. And in text mode you have very little control over how it works - you can't, for example, specify a different ESCAPE character. So text mode simply won't work.
So we have to fall back to the CSV format mode.
copy the_table(jsonfield)
from '/path/to/jsondata'
csv quote e'\x01' delimiter e'\x02';
In the official documentation for sql-copy, some of the relevant parameters are listed:
COPY table_name [ ( column_name [, ...] ) ]
FROM { 'filename' | PROGRAM 'command' | STDIN }
[ [ WITH ] ( option [, ...] ) ]
[ WHERE condition ]
where option can be one of:
FORMAT format_name
FREEZE [ boolean ]
DELIMITER 'delimiter_character'
NULL 'null_string'
HEADER [ boolean ]
QUOTE 'quote_character'
ESCAPE 'escape_character'
FORCE_QUOTE { ( column_name [, ...] ) | * }
FORCE_NOT_NULL ( column_name [, ...] )
FORCE_NULL ( column_name [, ...] )
ENCODING 'encoding_name'
FORMAT
Selects the data format to be read or written: text, csv (Comma Separated Values), or binary. The default is text.
QUOTE
Specifies the quoting character to be used when a data value is quoted. The default is double-quote. This must be a single one-byte character. This option is allowed only when using CSV format.
DELIMITER
Specifies the character that separates columns within each row (line) of the file. The default is a tab character in text format, a comma in CSV format. This must be a single one-byte character. This option is not allowed when using binary format.
NULL
Specifies the string that represents a null value. The default is \N (backslash-N) in text format, and an unquoted empty string in CSV format. You might prefer an empty string even in text format for cases where you don't want to distinguish nulls from empty strings. This option is not allowed when using binary format.
HEADER
Specifies that the file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table, and on input, the first line is ignored. This option is allowed only when using CSV format.
Cast the json as text, instead of getting the text value from the json. E.g.:
t=# with j as (
select '{"text_sample": "this is a\tsimple test", "number_sample": 4}'::json v
)
select v->>'text_sample' your, (v->'text_sample')::text better
from j;
your | better
-----------------------------+--------------------------
this is a simple test | "this is a\tsimple test"
(1 row)
And to avoid the 0x09 error, try using
replace(values, chr(9), '\t')
since in your example you replace backslash+t, not the actual chr(9).
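A minimal sketch combining that fix with the original query, assuming tab and carriage return are the only raw control characters in the data (add more replace() calls for others such as chr(10) if needed):
create table test as (
  select
    (v ->> 'text_sample') as text_sample,
    (v ->> 'number_sample') as number_sample
  from (
    -- turn the raw control characters into JSON escape sequences
    -- so the cast to json succeeds and the \t survives as-is
    select replace(replace(values, chr(9), '\t'), chr(13), '\r')::json as v
    from temp_json
  ) a
);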

Error Code: 1193 unknown system variable when importing a CSV

I'm trying to import data from a .csv file and I'm getting error code 1193, unknown system variable. I'm using MySQL 5.5.34.
LOAD DATA LOCAL INFILE 'path to the file/student_2.csv'
INTO TABLE STUDENT
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 2 LINES
(S_ID, S_LAST, S_FIRST, S_MI, S_ADDRESS, S_CITY, S_STATE, S_ZIP, S_PHONE, S_CLASS, @S_DOB, S_PIN, F_ID, @DATE_ENROLLED);
SET S_DOB = STR_TO_DATE(@S_DOB, '%m/%d/%Y'),
DATE_ENROLLED = STR_TO_DATE(@DATE_ENROLLED, '%m/%d/%Y');
The csv file's data is as follows:
S_ID,S_LAST,S_FIRST,S_MI,S_ADDRESS,S_CITY,S_STATE,S_ZIP,S_PHONE,S_CLASS,S_DOB,S_PIN,F_ID,DATE_ENROLLED
Number,String,String,String,String,String,String,String,String,String,Date/Time,String,Number,String
1,Joffs,Tami,R,1817 Eagldge Cle,Houston,TX,74027,356487654,SR,7/14/88,8891,1,1/3/13
2,Petez,Jimmge,C,951 Drainbow Place,Absail,TX,76901,3253945432,SR,18/09/76,1230,1,11/10/02
3,Marks,Johannes,A,1015 Wild St,Dallas,TX,71012,3251454321,JR,08/13/83,1613,1,8/24/03
4,Smyth,Mark,,428 EN 16 Plaza,Arsehole,TX,7012,3221143210,SO,1/14/88,1841,2,8/23/04
I also changed the year format from %Y to %y, and that did not work either.
Is something wrong with the script?
Hm - I can't try it out and I did not dive deep into your script, but are you sure about the ; before the SET commands?
....
(S_ID, S_LAST, S_FIRST, S_MI, S_ADDRESS, S_CITY,
S_STATE, S_ZIP, S_PHONE, S_CLASS, @S_DOB, S_PIN, F_ID, @DATE_ENROLLED);
SET S_DOB = STR_TO_DATE(@S_DOB, '%m/%d/%Y'),
DATE_ENROLLED = STR_TO_DATE(@DATE_ENROLLED, '%m/%d/%Y');
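That ; ends the LOAD DATA statement right after the column list, so the SET is then executed as a standalone statement and MySQL parses S_DOB as a (nonexistent) system variable, which matches error 1193. A minimal sketch with the semicolon dropped (using %y since the sample dates have two-digit years; adjust if your real data differs):
LOAD DATA LOCAL INFILE 'path to the file/student_2.csv'
INTO TABLE STUDENT
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 2 LINES
(S_ID, S_LAST, S_FIRST, S_MI, S_ADDRESS, S_CITY, S_STATE, S_ZIP,
 S_PHONE, S_CLASS, @S_DOB, S_PIN, F_ID, @DATE_ENROLLED)
SET S_DOB = STR_TO_DATE(@S_DOB, '%m/%d/%y'),
    DATE_ENROLLED = STR_TO_DATE(@DATE_ENROLLED, '%m/%d/%y');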

HIVE escaped by not working '\\'

I have a dataset in S3:
123, "some random, text", "", "", 236
I built an external table on this dataset:
CREATE EXTERNAL TABLE db1.myData(
  field1 bigint,
  field2 string,
  field3 string,
  field4 string,
  field5 bigint)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
ESCAPED BY '\\'
LOCATION 's3n://thisMyData/';
Problem/Issue:
when I do
select * from db1.myData
field2 is shown as
some random
I need the field to be
some random, text
Gotchas:
1. I cannot change the delimiter, as there are around 300 .csv files at this location.
2. ESCAPED BY is not escaping the '\\'.
3. I'm using Hive 0.13, so I cannot use the CSV SerDe, and I'm not allowed to import new jars to the cluster (it's a complicated process, as I have to go through Director-level approvals).
Question:
Is there a workaround for making 'ESCAPED BY' work?
Any other workarounds for this?
All suggestions are welcome!
N.B.: This is not a repeat question. If you think it is, please guide me to the right page and I will take this off this portal :)
I had to use ESCAPED BY '\134', which translates to ESCAPED BY '\'.
Additionally, because I was issuing the Athena CREATE TABLE statement by passing it in from a JSON file, I had to add an extra \ to mask the original \ in the JSON. So my final statement within the JSON file looked like this: ESCAPED BY '\\134'.
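A minimal sketch of the original DDL with that octal escape in place (\134 is the octal code for the backslash; this only helps if the files actually contain backslash-escaped delimiters):
CREATE EXTERNAL TABLE db1.myData(
  field1 bigint,
  field2 string,
  field3 string,
  field4 string,
  field5 bigint)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
ESCAPED BY '\134'
LOCATION 's3n://thisMyData/';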
If you are using Hive 0.14, you can use CSV Serde like this:
CREATE TABLE my_table(a string, b string, ...)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = "\t",
"quoteChar" = "'",
"escapeChar" = "\\"
)
STORED AS TEXTFILE;
Refer to the link below for details:
https://cwiki.apache.org/confluence/display/Hive/CSV+Serde
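Adapted to the data in this question (comma-separated, optionally double-quoted fields), a minimal sketch might look like the following; note that OpenCSVSerde reads every column as string, so the bigint fields would need casting at query time:
CREATE EXTERNAL TABLE db1.myData(
  field1 string,
  field2 string,
  field3 string,
  field4 string,
  field5 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "separatorChar" = ",",
  "quoteChar" = "\""
)
STORED AS TEXTFILE
LOCATION 's3n://thisMyData/';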