I am working on moving data from MySQL to Oracle. The MySQL input datasets have been provided via a MySQL data dump. Null values in the MySQL database were written as "\N" (without the quotes) in the output file.
I am using sqlldr to get the data into Oracle, and the "\N" values are problematic in columns mapped to the NUMBER data type because Oracle treats them as strings.
How do I tell sqlldr that any \N values in the input dataset should be mapped to Nulls in Oracle?
Thanks.
This is what worked for me. Note that if you are on Unix-based systems, the \N will need to be escaped as follows:
...
COLUMN_NM CHAR(4000) NULLIF COLUMN_NM='\\N',
...
You can use NULLIF in the control file. It will assign NULL if it finds \N in that column. See the syntax below:
<COLUMN_NAME> NULLIF <COLUMN_NAME> = '\\N'
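Pulling the two answers together, a minimal sketch of a complete control file might look like the following; the table, column names, and data file are hypothetical, and the doubled backslash follows the Unix note above:
LOAD DATA
INFILE 'emp.dat'
INTO TABLE EMP
FIELDS TERMINATED BY X'09'
(
  EMP_ID    CHAR(4000) NULLIF EMP_ID='\\N',   -- NUMBER column: \N in the file loads as NULL
  EMP_NAME  CHAR(4000) NULLIF EMP_NAME='\\N'  -- VARCHAR2 column: same treatment
)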
I am migrating a MySQL 5.5 physical host database to a MySQL 5.6 AWS Aurora database. I noticed that when data is written to a file using INTO OUTFILE, 5.5 writes NULL value as '\N' and empty string as ''. However, 5.6 writes both empty string and NULL as ''.
Query
SELECT * FROM $databasename.$tablename INTO OUTFILE $filename CHARACTER SET utf8 FIELDS TERMINATED BY $delimiter ESCAPED BY '\\\\';
I found the official documentation about this:
https://dev.mysql.com/doc/refman/5.6/en/load-data.html
With fixed-row format (which is used when FIELDS TERMINATED BY and FIELDS ENCLOSED BY are both empty), NULL is written as an empty string. This causes both NULL values and empty strings in the table to be indistinguishable when written to the file because both are written as empty strings. If you need to be able to tell the two apart when reading the file back in, you should not use fixed-row format.
How do I export NULL as '\N'?
First of all, that's strange; why would you want to do that? But if for some reason you want to export it that way, then you will have to change your query from SELECT * to one using a CASE expression, like:
select
case when col1 is null then '\\N' else col1 end as col1,
...
from $databasename.$tablename....
As commented, you can also use the IFNULL() or COALESCE() function for the same purpose.
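For instance, a minimal sketch with IFNULL(), reusing the placeholder names from the question:
select
  ifnull(col1, '\\N') as col1,  -- emit the two characters \N when col1 is NULL
  ifnull(col2, '\\N') as col2
from $databasename.$tablename;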
I'm building an AWS pipeline to insert CSV files from S3 to an RDS MySQL DB. The problem I'm facing is that when it attempts to load the file, it treats blanks as empty strings instead of NULLs. For example, Line 1 of the CSV is:
"3","John","Doe",""
Here the fourth value maps to an integer column in the MySQL table, and of course the error in the pipeline is:
Incorrect integer value: '' for column 'col4' at row 1
I was researching the MySQL JDBC parameters to modify the connection string:
jdbc:mysql://my-rds-endpoint:3306/my_db_name?jdbcCompliantTruncation=false
jdbcCompliantTruncation is just an example; is there any parameter that can help me insert those blanks as NULLs?
Thanks!
EDIT:
A little context: the CSV files are UNLOADs from Redshift, so the blanks are originally NULLs when I put them in S3.
"the csv files are UNLOADS from redshift"
Then look at the documentation for the Redshift UNLOAD command and add the NULL AS option. For example:
NULL AS 'NULL'
Using null as '\N' converts the blanks back to NULLs:
unload ('SELECT * FROM table')
to 's3://path' credentials
'aws_access_key_id=sdfsdhgfdsjfhgdsjfhgdsjfh;aws_secret_access_key=dsjfhsdjkfhsdjfksdhjkfsdhfjkdshfs'
delimiter '|' null as '\\N' ;
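On the MySQL side, LOAD DATA INFILE with the default FIELDS ESCAPED BY '\\' then reads each unquoted \N back in as SQL NULL, so the unloaded NULLs survive the round trip.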
I resolved this issue using the NULLIF function:
insert into table values (NULLIF(?,''),NULLIF(?,''),NULLIF(?,''),NULLIF(?,''))
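For reference, NULLIF(a, b) returns NULL when a equals b, and a otherwise, so empty strings become NULLs at insert time. A quick illustration with a hypothetical table and literal values in place of the bind parameters:
insert into my_table values (NULLIF('3',''), NULLIF('John',''), NULLIF('Doe',''), NULLIF('',''));
-- the fourth value is stored as NULL instead of an empty string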
I have to move a table from MS SQL Server to MySQL (~8M rows with 8 columns). One of the columns (DECIMAL type) is exported as an empty string by the "bcp" export to a CSV file. When I use this CSV file to load the data into the MySQL table, it fails saying "Incorrect decimal value".
Looking for possible workarounds or suggestions.
I would create a view in MS SQL which converts the decimal column to a varchar column:
CREATE VIEW MySQLExport AS
SELECT [...]
COALESCE(CAST(DecimalColumn AS VARCHAR(50)),'') AS DecimalColumn
FROM SourceTable;
Then, import into a staging table in MySQL, and use a CASE statement for the final INSERT:
INSERT INTO DestinationTable ([...])
SELECT [...]
CASE DecimalColumn
WHEN '' THEN NULL
ELSE CAST(DecimalColumn AS DECIMAL(10,5))
END AS DecimalColumn,
[...]
FROM ImportMSSQLStagingTable;
This is safe because the only way the value can be an empty string in the export file is if it's NULL.
Note that I doubt you can cheat by exporting it with COALESCE(CAST(DecimalColumn AS VARCHAR(50)),'\N'), because LOAD DATA INFILE would see that as the literal string '\N', which is not the same as \N.
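For completeness, the load into the MySQL staging table might look like the following; the file path, field terminator, and line terminator are assumptions:
LOAD DATA INFILE '/tmp/export.csv'
INTO TABLE ImportMSSQLStagingTable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';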
I have an Access DB. I exported the tables to .xlsx, then saved them as .ods using OpenOffice, because I found out that phpMyAdmin/MySQL no longer supports Excel files. I have my MySQL database formatted exactly as it should be to accept the data. I import and everything seems fine, except for one little detail.
In some fields, the value is NULL instead of the value it should have according to the .ods file. Some rows show the correct value for that field, some show NULL.
Also, the "faulty" rows have some fields showing the value 0 where the fields were empty in the imported file (instead of NULL). The default value for those fields in MySQL is NULL. Each row has many fields like that, all of the same data type (tinyint). Some appear correctly as NULL and some have the value 0.
I can't see a pattern in any of this.
Any help is appreciated.
Check that imported strings have ("") quotes and NULLs do not, and that everything is separated appropriately, usually by a "," comma with the record/row delimited by a ";" semicolon. The best way to check what MySQL is looking for is to export some existing data in the same format and compare it against what you are trying to import. One little missed quote and the deal is off. Be consistent in the use of either double (") or single (') quotes; as far as I know, the backtick (`) character is not used. If you are "squishing" your data through an application that applies "smart quotes", as MS Word or OpenOffice do, this too can cause issues. Add the word NULL, either with or without quotes as appropriate, in your CSV import where the values should be NULL.
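For example, to see exactly what MySQL expects, you could dump a few existing rows and inspect the result; the path, table name, and delimiters below are assumptions:
SELECT * FROM your_table
INTO OUTFILE '/tmp/sample.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';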
I am creating an SSIS package which reads data from a CSV file and stores it in a SQL Server database. There are a few numeric fields in the CSV files.
They sometimes contain values like "1,008.54".
How do I remove the quotes and the comma from the value?
I have successfully separated the rows with this kind of data by using a Conditional Split Transformation.
(SUBSTRING([Column 9],1,1) == "\"")
After this, I tried using a Derived Column Transformation to REPLACE the comma and quotes with an empty string, but it is not working.
Please advise.
Thanks!
I tested your sample value "1,008.54" in a Data Flow where my source was:
SELECT '"1,008.54"' AS [Column]
I then placed the following expression in a Derived Column Transformation (imitating what you attempted):
REPLACE(REPLACE(Column,",",""),"\"","")
and it successfully achieved your request of using a Derived Column Transformation to REPLACE the comma and quotes with an empty string.
Here's the result as presented by a data viewer after the Derived Column Transformation:
Column          Derived Column 1
"1,008.54"      1008.54