Attunity zero timestamp value insert into MySQL DB - mysql

We are using the Attunity tool to insert data from one MySQL database into another MySQL database, and there is a problem with '0000-00-00 00:00:00' values in columns defined as TIMESTAMP in the source database.
The Attunity task doesn't return any error message; it just runs forever.
There is this passage in the manual:
'If the DATETIME and TIMESTAMP data types are specified with a “zero” value (i.e. 0000-00-00), you need to make sure that the target database in the replication task supports "zero" values for the DATETIME and TIMESTAMP data types. If they are not supported, you can use a transformation to specify a supported value (e.g. 1970.) Otherwise, they will be recorded as null on the target.'
Nevertheless, if I try to convert the value using the expression builder, the expression works when tested with the test-expression function, but when the job runs the behavior is the same: no error message, the run never finishes, and no values are inserted.
I tried the following expressions, both of which work correctly in the expression builder:
replace($column_name, '0000-00-00 00:00:00','1000-01-01 00:00:00')
Expression builder supports SQLite functions.
CASE WHEN $column_name = '0000-00-00 00:00:00' THEN '1000-01-01 00:00:00' ELSE $column_name END
It seems the Attunity tool first attempts to insert the data and only then applies the transformation, by which point it is too late.
Converting the data type to string within Attunity doesn't help either.
I have run out of ideas about what else to try.
Could you help?
Thank you

Related

Google Apps Script - MySQL data import using JDBC does not work with Date 0000-00-00 [duplicate]

I have a database table containing dates
(`date` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00').
I'm using MySQL. Sometimes the program passes data to the database without a date, so the date value is automatically assigned 0000-00-00 00:00:00.
When the table data is queried with the date column, it gives the error:
...'0000-00-00 00:00:00' can not be represented as java.sql.Timestamp.......
I tried to pass a null value for the date when inserting data, but it gets assigned the current time.
Is there any way I can get the ResultSet without changing the table structure?
You can use this JDBC URL directly in your data source configuration:
jdbc:mysql://yourserver:3306/yourdatabase?zeroDateTimeBehavior=convertToNull
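For illustration, here is a minimal JDBC sketch using that URL option (the server, database, credentials, and the table/column names are placeholders; note that newer Connector/J 8.x versions document the value as CONVERT_TO_NULL):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class ZeroDateDemo {
    public static void main(String[] args) throws Exception {
        // zeroDateTimeBehavior=convertToNull tells Connector/J to return NULL
        // instead of throwing for '0000-00-00 00:00:00' values.
        String url = "jdbc:mysql://yourserver:3306/yourdatabase"
                + "?zeroDateTimeBehavior=convertToNull";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT `date` FROM your_table")) {
            while (rs.next()) {
                Timestamp ts = rs.getTimestamp("date"); // null instead of an exception
                System.out.println(ts);
            }
        }
    }
}

With that option in place, getTimestamp() returns null for zero dates instead of throwing, so the calling code only has to handle an ordinary null.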
Whether or not the "date" '0000-00-00' is a valid date is irrelevant to the question.
"Just change the database" is seldom a viable solution.
Facts:
MySQL allows a date with the value of zeros.
This "feature" enjoys widespread use with other languages.
So, if I "just change the database", thousands of lines of PHP code will break.
Java programmers need to accept the MySQL zero-date and they need to put a zero date back into the database, when other languages rely on this "feature".
A programmer connecting to MySQL needs to handle null and 0000-00-00 as well as valid dates. Changing 0000-00-00 to null is not a viable option, because then you can no longer determine if the date was expected to be 0000-00-00 for writing back to the database.
For 0000-00-00, I suggest checking the date value as a string, then changing it to a year-1 date (e.g. "0001-01-01"), or to any other date MySQL considers invalid (anything below year 1000, iirc); a sketch follows below. MySQL has another "feature": such low dates are automatically converted back to 0000-00-00.
I realize my suggestion is a kludge. But so is MySQL's date handling.
And two kludges don't make it right. The fact of the matter is, many programmers will have to handle MySQL zero-dates forever.
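A minimal sketch of that string-checking suggestion with JDBC (the URL, table, and column names are invented for illustration, and getString() behaviour on zero dates can vary between Connector/J versions, so treat this as a sketch rather than a guaranteed recipe):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ZeroDateRoundTrip {
    private static final String ZERO_DATE = "0000-00-00 00:00:00";

    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://yourserver:3306/yourdatabase"; // placeholder
        List<String> dates = new ArrayList<String>();
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT `date` FROM your_table")) {
            while (rs.next()) {
                // Read the raw string instead of asking for a java.sql.Timestamp,
                // so the zero date never hits the Timestamp conversion.
                String raw = rs.getString("date");
                if (raw == null || raw.startsWith("0000-00-00")) {
                    raw = ZERO_DATE; // keep the sentinel so it can be written back later
                }
                dates.add(raw);
            }
        }
        System.out.println(dates);
    }
}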
Append the following parameters to the JDBC MySQL connection URL:
?zeroDateTimeBehavior=convertToNull&autoReconnect=true&characterEncoding=UTF-8&characterSetResults=UTF-8
for example:
jdbc:mysql://localhost/infra?zeroDateTimeBehavior=convertToNull&autoReconnect=true&characterEncoding=UTF-8&characterSetResults=UTF-8
Instead of using fake dates like 0000-00-00 00:00:00 or 0001-01-01 00:00:00 (the latter should be accepted, as it is a valid date), change your database schema to allow NULL values:
ALTER TABLE table_name MODIFY COLUMN date TIMESTAMP NULL
As an extreme workaround, when you cannot alter your date column or update the values, or while those modifications are taking place, you can do a SELECT using CASE/WHEN:
SELECT CASE ModificationDate WHEN '0000-00-00 00:00:00' THEN '1970-01-01 01:00:00' ELSE ModificationDate END AS ModificationDate FROM Project WHERE projectId=1;
You can try something like this:
// Read the timestamp column defensively: fall back to the zero-date string
// when the driver cannot represent the value as a java.sql.Timestamp.
ArrayList<String> dtlst = new ArrayList<String>();
String qry1 = "select dt_tracker from gs";
Statement prepst = conn.createStatement();
ResultSet rst = prepst.executeQuery(qry1);
while (rst.next())
{
    String dt = "";
    try
    {
        dt = rst.getDate("dt_tracker") + " " + rst.getTime("dt_tracker");
    }
    catch (Exception e)
    {
        // '0000-00-00 00:00:00' cannot be converted, so keep it as a plain string
        dt = "0000-00-00 00:00:00";
    }
    dtlst.add(dt);
}
I wrestled with this problem and implemented the URL-parameter solution contributed by #Kushan in the accepted answer above. It worked in my local MySQL instance. But when I deployed my Play/Scala app to Heroku it no longer worked. Heroku also concatenates several args onto the DB URL that it provides to users, and because Heroku already uses "?" to introduce its own set of args, this solution will not work there. However, I found a different solution which seems to work equally well.
SET sql_mode = 'NO_ZERO_DATE';
I put this in my table descriptions and it solved the problem of
'0000-00-00 00:00:00' can not be represented as java.sql.Timestamp
There was no year 0000 and there is no month 00 or day 00. I suggest you try
0001-01-01 00:00:00
While a year 0 has been defined in some standards, it is more likely to be confusing than useful IMHO.
Just cast the field as CHAR.
E.g.: CAST(updatedate AS CHAR) AS updatedate
I know this is going to be a late answer, however here is the most correct answer.
In the MySQL database, change your timestamp column's default value to CURRENT_TIMESTAMP. If you have old records with the fake value, you will have to fix them manually.
You can remove the NOT NULL property from the column in your MySQL table if it is not necessary. Once NOT NULL is removed, there is no need for the "0000-00-00 00:00:00" conversion and the problem is gone.
At least worked for me.
I believe this is helpful for those getting the exception below when pumping data through Logstash:
Error: logstash.inputs.jdbc - Exception when executing JDBC query {:exception=>#}
Answer: jdbc:mysql://localhost:3306/database_name?zeroDateTimeBehavior=convertToNull
or if you are working with mysql

Incorrect datetime value: '0000-00-00 00:00:00' for column

I have a table with 2M records and everything worked fine until a few hours ago.
Suddenly it throws an error on a query that previously worked fine for more than a year.
The problem is that inserting a correct datetime like 2019-07-15 22:22:47 into a timestamp column returns the error:
Incorrect datetime value: '0000-00-00 00:00:00' for column 'created_at' at row 1
1- I duplicated the table structure and the query works fine on the duplicated table
2- I ran yum update today.
3- OS: CentOS release 6.10 (Final)
4- MySql: Server version: 8.0.16 MySQL Community Server - GPL
Edit:
I have read the other questions but this is completely different; I've posted the answer.
I found it; posting as an answer in case it helps others.
It seems that the latest MySQL update added some new rules for comparing datetime values; however, I think the error thrown is completely irrelevant.
I had a trigger on my table which checks some parameters and also checks whether the created_at column equals '0000-00-00 00:00:00', changing it to current_timestamp if so. Part of the trigger is:
IF (NEW.created_at = '0000-00-00 00:00:00') THEN
    SET NEW.created_at = current_timestamp();
END IF;
It is just a simple comparison and the result should be true or false; it should not throw the zero-date exception.
However, I removed this part and everything is up now.
Edit /etc/my.cnf and add the line:
sql-mode = ""
Save this file
Restart MySQL service
systemctl restart mysqld
Then try your SQL command, for example:
UPDATE test SET modified = '0000-00-00 00:00:00';

MySQL datetime field - can't write date

There is one table with a datetime field set to allow nulls.
I am unable to enter any date in this field; I tried these:
"2011-01-01 00:00:00"
"0000-00-00"
"21.01.2013"
and many others, but all of them report the error:
Microsoft Visual Studio
---------------------------
Invalid value for cell (row 1, column 3).
The changed value in this cell was not recognized as valid.
.Net Framework Data Type: MySqlDateTime
Error Message: Invalid cast from 'System.String' to 'MySql.Data.Types.MySqlDateTime'.
Type a value appropriate for the data type or press ESC to cancel the change.
What can I do?
EDIT: The above problem occurs when I enter data directly into the MySQL table using Visual Studio's Server Explorer. If I open MySQL Workbench there is no problem, and I can write the date in the format "2011-01-01 00:00:00". Why does VS treat the MySQL DB differently than MySQL Workbench?
And here is what I am trying to do from the code:
Private Sub DataGridView1_CellValidating(sender As Object, e As DataGridViewCellValidatingEventArgs) Handles DataGridView1.CellValidating
    Debug.Print(e.ToString)
    If e.ColumnIndex = 3 Then
        ' Entering this test value does not work
        DataGridView1.Rows(e.RowIndex).Cells(3).Value = "2011-01-01 00:00:00"
    End If
End Sub
I had a similar issue migrating old Java code. The code tried to save "2011-01-01 00:00:00" to a DATE field in the database. It worked when I removed the time and saved the date only, as "2011-01-01".
Originally I ran into that issue while setting up a development environment under Windows. The project had been developed entirely under Linux and worked fine with the default MySQL instance there. To my surprise it didn't work with the default database instance under Windows. I tried to find out what was different, but gave up, as changing the code was easier and worked fine.

Why does the Django time zone setting affect epoch time?

I have a small Django project that imports data dumps from MongoDB into MySQL. Inside these Mongo dumps are dates stored in epoch time. I would expect epoch time to be the same regardless of time zone but what I am seeing is that the Django TIME_ZONE setting has an effect on the data created in MySQL.
I have been testing my database output with the MySQL UNIX_TIMESTAMP function. If I insert a date with the epoch of 1371131402880 (this includes milliseconds) while my timezone is set to 'America/New_York', UNIX_TIMESTAMP gives me 1371131402, which is the same epoch time excluding milliseconds. However, if I set my timezone to 'America/Chicago' I get 1371127802.
This is my code to convert the epoch times into Python datetime objects,
from datetime import datetime
from django.utils.timezone import utc
secs = float(epochtime) / 1000.0
dt = datetime.fromtimestamp(secs)
I tried to fix the issue by putting an explicit timezone on the datetime object,
# epoch time is in UTC by default
dt = dt.replace(tzinfo=utc)
PythonFiddle for the code
I've tested this Python code in isolation and it gives me the expected results. However, it does not give the correct results after inserting these objects into MySQL through a Django model DateTimeField.
Here is my MySQL query,
SELECT id, `date`, UNIX_TIMESTAMP(`date`) FROM table
I test this by comparing the unix timestamp column in the result of this query against the MongoDB JSON dumps to see if the epoch matches.
What exactly is going on here? Why should timezone have any effect on epoch times?
Just for reference, I am using Django 1.5.1 and MySQL-python 1.2.4. I also have the Django USE_TZ flag set to true.
I am no python or Django guru, so perhaps someone can answer better than me. But I will take a guess at it anyway.
You said that you were storing it in a Django DateTimeField, which according to the documents you referenced, stores it as a Python datetime.
Looking at the docs for datetime, I think the key is understanding the difference between "naive" and "aware" values.
And then researching further, I came across this excellent reference. Be sure to read the second section, "Naive and aware datetime objects". That gives a bit of context to how much of this is being controlled by Django. Basically, by setting USE_TZ = True, you are asking Django to use aware datetimes instead of naive ones.
So then I looked back at your question. You said you were doing the following:
dt = datetime.fromtimestamp(secs)
dt = dt.replace(tzinfo=utc)
Looking at the fromtimestamp function documentation, I found this bit of text:
If optional argument tz is None or not specified, the timestamp is converted to the platform’s local date and time, and the returned datetime object is naive.
So I think you could do this:
dt = datetime.fromtimestamp(secs, tz=utc)
Then again, right below that function, the docs show utcfromtimestamp function, so maybe it should be:
dt = datetime.utcfromtimestamp(secs)
I don't know enough about python to know if these are equivalent or not, but you could try and see if either makes a difference.
Hopefully one of these will make a difference. If not, please let me know. I'm intimately familiar with date/time in JavaScript and in .Net, but I'm always interested in how these nuances play out differently in other platforms, such as Python.
Update
Regarding the MySQL portion of the question, take a look at this fiddle.
CREATE TABLE foo (`date` DATETIME);
INSERT INTO foo (`date`) VALUES (FROM_UNIXTIME(1371131402));
SET TIME_ZONE="+00:00";
select `date`, UNIX_TIMESTAMP(`date`) from foo;
SET TIME_ZONE="+01:00";
select `date`, UNIX_TIMESTAMP(`date`) from foo;
Results:
DATE                          UNIX_TIMESTAMP(`DATE`)
June, 13 2013 13:50:02+0000   1371131402
June, 13 2013 13:50:02+0000   1371127802
It would seem that the behavior of UNIX_TIMESTAMP function is indeed affected by the MySQL TIME_ZONE setting. That's not so surprising, since it's in the documentation. What's surprising is that the string output of the datetime has the same UTC value regardless of the setting.
Here's what I think is happening. In the docs for the UNIX_TIMESTAMP function, it says:
date may be a DATE string, a DATETIME string, a TIMESTAMP, or a number in the format YYMMDD or YYYYMMDD.
Note that it doesn't say that it can be a DATETIME - it says it can be a DATETIME string. So I think the actual value is being implicitly converted to a string before being passed into the function.
So now look at this updated fiddle that converts explicitly.
SET TIME_ZONE="+00:00";
select `date`, convert(`date`, char), UNIX_TIMESTAMP(convert(`date`, char)) from foo;
SET TIME_ZONE="+01:00";
select `date`, convert(`date`, char), UNIX_TIMESTAMP(convert(`date`, char)) from foo;
Results:
DATE                          CONVERT(`DATE`, CHAR)   UNIX_TIMESTAMP(CONVERT(`DATE`, CHAR))
June, 13 2013 13:50:02+0000   2013-06-13 13:50:02     1371131402
June, 13 2013 13:50:02+0000   2013-06-13 13:50:02     1371127802
You can see that when it converts to character data, it strips away the offset. So of course, it makes sense now that when UNIX_TIMESTAMP takes this value as input, it is assuming the local time zone setting and thus getting a different UTC timestamp.
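The same effect is easy to reproduce outside MySQL. As a small Java illustration (using the same timestamps as the fiddle above), one wall-clock string maps to different epoch values depending on which offset it is interpreted in:

import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class OffsetDemo {
    public static void main(String[] args) {
        LocalDateTime local = LocalDateTime.parse("2013-06-13T13:50:02");
        // Interpreted as UTC (TIME_ZONE = +00:00 above):
        System.out.println(local.toEpochSecond(ZoneOffset.UTC));        // 1371131402
        // Interpreted as +01:00 (TIME_ZONE = +01:00 above):
        System.out.println(local.toEpochSecond(ZoneOffset.ofHours(1))); // 1371127802
    }
}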
Not sure if this will help you or not. You need to dig more into exactly how Django is calling MySQL for both the read and the write. Does it actually use the UNIX_TIMESTAMP function? Or was that just what you did in testing?

How do you properly update a mysql field with NULL?

How does one properly update a mysql field with a NULL value when using a variable in the sql query?
I have a variable called $timestamp. When it's set to date('Y-m-d h:i:s') I have to wrap it in quotes because I'm passing a string in my MySQL query. When $timestamp is set to NULL, the database query contains '' as the value for $timestamp and the field updates to 0000-00-00 00:00:00. It's important to keep this field as NULL to show that the process has never been run before.
I don't want to use now() because then my sql statement is not in sync with my class variable $timestamp.
I don't want to set $timestamp to 'NULL' because then that variable is not accurate. It's no longer NULL, it's set to a string that contains the word NULL.
What am I missing here?
The correct SQL syntax to set a column to NULL is:
UPDATE Table SET Column = NULL WHERE . . .
(note the lack of quotes around the literal NULL).
Are you performing this UPDATE using SQL or using some kind of framework? If a framework, it should recognize NULL values and pass them to the database correctly for you.
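For illustration, this is roughly what such a layer does under the hood, sketched here with a JDBC prepared statement (the table and column names are invented; in PHP, PDO's parameter binding plays the same role):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.sql.Types;

public class NullableTimestampUpdate {
    // 'jobs' and 'last_run' are made-up names for illustration only.
    public static void updateLastRun(Connection conn, int id, Timestamp lastRun) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE jobs SET last_run = ? WHERE id = ?")) {
            if (lastRun == null) {
                ps.setNull(1, Types.TIMESTAMP); // a real SQL NULL, not the string 'NULL'
            } else {
                ps.setTimestamp(1, lastRun);
            }
            ps.setInt(2, id);
            ps.executeUpdate();
        }
    }
}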
After a lot of research, I've found that this is a well-known problem with no good solution if you are writing your SQL queries by hand.
The correct solution is to use a database abstraction layer like PDO (for PHP) or Active Record (used in frameworks like CodeIgniter and Ruby on Rails).