Using AppEngine/BigQuery. The Timestamp field has stopped parsing.
Here is my Schema:
[
  {"name": "RowID", "type": "string"},
  {"name": "Timestamp", "type": "timestamp"},
  {"name": "Keyword", "type": "string"},
  {"name": "Engine", "type": "string"},
  {"name": "Locale", "type": "string"},
  {"name": "Geo", "type": "string"},
  {"name": "Device", "type": "string"},
  {"name": "Metrics", "type": "record", "fields": [
    {"name": "GlobalSearchVolume", "type": "integer"},
    {"name": "CPC", "type": "float"},
    {"name": "Competition", "type": "float"}
  ]}
]
and here is a JSON row that is being shipped to BQ for this schema:
{
  "RowID": "6263121748743343555",
  "Timestamp": "2015-01-13T07:04:05.999999999Z",
  "Keyword": "buy laptop",
  "Engine": "google",
  "Locale": "en_us",
  "Geo": "",
  "Device": "d",
  "Metrics": {
    "GlobalSearchVolume": 3600,
    "CPC": 7.079999923706055,
    "Competition": 1
  }
}
This data is accepted by BigQuery, but the timestamp shows up as nil (1970-01-01 00:00:00 UTC) in the table preview.
I have also tried sending through the UNIX timestamp, to no avail. Can you see any errors with my schema or input data that would cause the timestamp to not parse?
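For what it's worth, BigQuery TIMESTAMP values have microsecond precision, so the nine fractional digits in the row above exceed what the column can represent. A minimal Java sketch (java.time only; truncating to microseconds is my assumption about what BigQuery expects) that produces both representations of the example value:

import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TimestampForms {
    public static void main(String[] args) {
        // The example value from the row above.
        Instant eventTime = Instant.parse("2015-01-13T07:04:05.999999999Z");

        // RFC 3339 string truncated to microseconds (BigQuery TIMESTAMP precision).
        String rfc3339 = eventTime.truncatedTo(ChronoUnit.MICROS).toString();
        System.out.println(rfc3339); // 2015-01-13T07:04:05.999999Z

        // UNIX epoch seconds as the alternative representation.
        System.out.println(eventTime.getEpochSecond()); // 1421132645
    }
}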
I had a similar issue, but I was just checking the details in the preview window. When I actually ran queries, the timestamps worked correctly. It often took up to 24 hours for the preview to update the timestamps to the actual values.
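If you don't want to wait for the preview, you can confirm by querying instead. A rough sketch using the google-cloud-bigquery Java client (the dataset and table names are placeholders):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;

public class CheckTimestamps {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        QueryJobConfiguration config = QueryJobConfiguration.newBuilder(
                "SELECT RowID, Timestamp FROM `your_dataset.your_table` LIMIT 10")
                .build();
        for (FieldValueList row : bigquery.query(config).iterateAll()) {
            // getTimestampValue() returns microseconds since the epoch.
            System.out.println(row.get("RowID").getStringValue()
                    + " -> " + row.get("Timestamp").getTimestampValue());
        }
    }
}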
Related
I have a database table containing dates
(`date` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00').
I'm using MySQL. Sometimes the program passes data to the database without a date, so the date value is auto-assigned to 0000-00-00 00:00:00.
When the table data is fetched with the date column included, it gives the error
...'0000-00-00 00:00:00' can not be represented as java.sql.Timestamp...
I tried to pass a null value for the date when inserting data, but it gets assigned the current time instead.
Is there any way I can get the ResultSet without changing the table structure?
You can use this JDBC URL directly in your data source configuration:
jdbc:mysql://yourserver:3306/yourdatabase?zeroDateTimeBehavior=convertToNull
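With that flag set, the driver returns null for zero dates instead of throwing, so an ordinary null check is all the reading code needs. A minimal sketch, assuming placeholder credentials and table/column names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class ZeroDateDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://yourserver:3306/yourdatabase"
                + "?zeroDateTimeBehavior=convertToNull";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT `date` FROM your_table")) {
            while (rs.next()) {
                // Zero dates now come back as null instead of throwing.
                Timestamp ts = rs.getTimestamp("date");
                System.out.println(ts == null ? "zero/NULL date" : ts.toString());
            }
        }
    }
}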
Whether or not the "date" '0000-00-00' is a valid date is irrelevant to the question.
"Just change the database" is seldom a viable solution.
Facts:
MySQL allows a date with the value of zeros.
This "feature" enjoys widespread use with other languages.
So, if I "just change the database", thousands of lines of PHP code will break.
Java programmers need to accept the MySQL zero-date and they need to put a zero date back into the database, when other languages rely on this "feature".
A programmer connecting to MySQL needs to handle null and 0000-00-00 as well as valid dates. Changing 0000-00-00 to null is not a viable option, because then you can no longer determine if the date was expected to be 0000-00-00 for writing back to the database.
For 0000-00-00, I suggest checking the date value as a string, then changing it to a sentinel such as "0001-01-01", or to any invalid MySQL date (less than year 1000, iirc). MySQL has another "feature": low dates are automatically converted to 0000-00-00.
I realize my suggestion is a kludge. But so is MySQL's date handling.
And two kludges don't make it right. The fact of the matter is, many programmers will have to handle MySQL zero-dates forever.
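If you need to tell 0000-00-00 apart from a real NULL (e.g. to write the zero date back later), one option along these lines is to read the column as a string first. A sketch under the driver's default zeroDateTimeBehavior (where getString returns the raw value but getTimestamp would throw); the table and column names are hypothetical:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class ZeroDateReader {
    // Assumes an open Connection to MySQL with the driver's default
    // zeroDateTimeBehavior (getString works where getTimestamp throws).
    static void readPreservingZeroDates(Connection conn) throws Exception {
        try (ResultSet rs = conn.createStatement()
                .executeQuery("SELECT `date` FROM your_table")) {
            while (rs.next()) {
                String raw = rs.getString("date");
                if (raw == null) {
                    System.out.println("SQL NULL");
                } else if (raw.startsWith("0000-00-00")) {
                    System.out.println("zero date, keep for write-back");
                } else {
                    Timestamp ts = rs.getTimestamp("date");
                    System.out.println(ts);
                }
            }
        }
    }
}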
Append the following parameters to the JDBC MySQL connection URL:
?zeroDateTimeBehavior=convertToNull&autoReconnect=true&characterEncoding=UTF-8&characterSetResults=UTF-8
for example:
jdbc:mysql://localhost/infra?zeroDateTimeBehavior=convertToNull&autoReconnect=true&characterEncoding=UTF-8&characterSetResults=UTF-8
Instead of using fake dates like 0000-00-00 00:00:00 or 0001-01-01 00:00:00 (the latter should be accepted, as it is a valid date), change your database schema to allow NULL values.
ALTER TABLE table_name MODIFY COLUMN date TIMESTAMP NULL
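Once the column allows NULL, passing null from JDBC actually stores NULL instead of being replaced by the current time (which is what happens with a NOT NULL timestamp column). A minimal sketch with placeholder names:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Types;

public class InsertNullDate {
    // Assumes `conn` is an open Connection and the `date` column allows NULL.
    static void insertWithoutDate(Connection conn, String name) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO your_table (name, `date`) VALUES (?, ?)")) {
            ps.setString(1, name);
            ps.setNull(2, Types.TIMESTAMP); // stores NULL, not 0000-00-00
            ps.executeUpdate();
        }
    }
}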
As an extreme workaround, when you cannot alter your date column or update the values, or while those modifications take place, you can do a SELECT using CASE/WHEN.
SELECT CASE ModificationDate WHEN '0000-00-00 00:00:00' THEN '1970-01-01 01:00:00' ELSE ModificationDate END AS ModificationDate FROM Project WHERE projectId=1;
You can try something like this:
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;

// `conn` is an existing java.sql.Connection to the MySQL database.
ArrayList<String> dtlst = new ArrayList<String>();
String qry1 = "select dt_tracker from gs";
Statement prepst = conn.createStatement();
ResultSet rst = prepst.executeQuery(qry1);
while (rst.next()) {
    String dt;
    try {
        dt = rst.getDate("dt_tracker") + " " + rst.getTime("dt_tracker");
    } catch (Exception e) {
        // The driver throws on zero dates; fall back to the literal value.
        dt = "0000-00-00 00:00:00";
    }
    dtlst.add(dt);
}
I wrestled with this problem and implemented the URL-concatenation solution contributed by @Kushan in the accepted answer above. It worked in my local MySQL instance. But when I deployed my Play/Scala app to Heroku it no longer worked: Heroku concatenates several args of its own, after a "?", onto the DB URL it provides, so this solution will not work there. However, I found a different solution which seems to work equally well.
SET sql_mode = 'NO_ZERO_DATE';
I put this in my table descriptions and it solved the problem of
'0000-00-00 00:00:00' can not be represented as java.sql.Timestamp
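Note that sql_mode set this way applies per session unless you set it globally, so from JDBC you would issue it right after connecting; a rough sketch with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SessionSqlMode {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/yourdatabase", "user", "pass");
             Statement st = conn.createStatement()) {
            // Applies to this session only; use SET GLOBAL (with the right
            // privileges) or my.cnf to make it permanent.
            st.execute("SET sql_mode = 'NO_ZERO_DATE'");
        }
    }
}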
There was no year 0000 and there is no month 00 or day 00. I suggest you try
0001-01-01 00:00:00
While a year 0 has been defined in some standards, it is more likely to be confusing than useful IMHO.
Just cast the field as CHAR.
Eg: CAST(updatedate AS CHAR) AS updatedate
I know this is going to be a late answer, however here is the most correct one.
In MySQL database, change your timestamp default value into CURRENT_TIMESTAMP. If you have old records with the fake value, you will have to manually fix them.
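For example, the schema change plus a one-off repair of the old rows could look like this (table and column names are placeholders, and using CURRENT_TIMESTAMP for the old rows is just one choice of replacement value):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FixZeroDates {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/yourdatabase", "user", "pass");
             Statement st = conn.createStatement()) {
            // New rows now default to the insert time instead of a zero date.
            st.execute("ALTER TABLE your_table MODIFY COLUMN `date` "
                    + "TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP");
            // One-off repair of the existing fake values.
            st.executeUpdate("UPDATE your_table SET `date` = CURRENT_TIMESTAMP "
                    + "WHERE `date` = '0000-00-00 00:00:00'");
        }
    }
}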
You can remove the NOT NULL constraint from your column in the MySQL table if it is not necessary. When you remove NOT NULL, there is no need for the "0000-00-00 00:00:00" conversion and the problem is gone.
At least it worked for me.
I believe this is helpful for anyone who is getting the exception below when pumping data through Logstash:
Error: logstash.inputs.jdbc - Exception when executing JDBC query {:exception=>#}
Answer: jdbc:mysql://localhost:3306/database_name?zeroDateTimeBehavior=convertToNull
I developed Node.js services using Sequelize with MySQL, and everything works well. I store data in the DB with date and time via the service, but when I fetch the same data back, Node.js automatically converts the time (to a wrong value which is not what is in the database). I want the same time that is present in the database. To solve this issue I set the timezone in the services and in the database, but neither worked. Can anyone provide a solution?
Suggestion:
you could use the TIME_FORMAT function in MySQL,
for example:
SELECT TIME_FORMAT("19:30:10", "%H %i %s");
result:
TIME_FORMAT("19:30:10", "%H %i %s")
19 30 10
That's not an issue with Node.js. Sequelize returns date-times in UTC by default, so the stored date-time is correct.
Check: just compare your date-time with UTC and confirm that is what you are getting.
Solution: convert the returned date and time to your timezone and you are good to go.
And using UTC everywhere is good practice; you get the benefit when storing and showing times across many countries.
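The question is about Node.js, but the conversion itself is language-agnostic: store and transfer UTC, shift to the user's zone only for display. For illustration, the same idea in Java with java.time (the example value and target zone are assumptions):

import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class UtcToLocal {
    public static void main(String[] args) {
        // The value as stored and returned in UTC.
        Instant stored = Instant.parse("2021-03-05T19:30:10Z");
        // Shift to the client's zone only when displaying.
        ZonedDateTime local = stored.atZone(ZoneId.of("Asia/Kolkata"));
        System.out.println(local); // 2021-03-06T01:00:10+05:30[Asia/Kolkata]
    }
}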
I have to upload JSON data into BigQuery. While uploading the load I got the error mentioned below.
I debugged and found that it fails on this JSON record, which seems to be valid:
{"firebaseUid":"00FKNF7x2BQhDoPk9TSzE4Ncepn1","age_range":{"min":21},"signUpApp":"stationApp","uid":"00FKNF7x2BQhDoPk9TSzE4Ncepn1","locale":"en_US","emailSha256":"501a8456ececb2a50e733eed6c64b840d63d3aad99fb9ad4a1bbd2cbc33fc1f6","loginMethod":"facebook","notificationToken":"dummy","ageRangeMin":21,"pushNotificationEnabled":true,"projectId":"triplembaas","createDate":"13/07/2018","state":"QLD","station":"TripleM 104.5","facebookId":"1021TheHotBreakfast740157586","email":"connollyharley#gmail.com","cellularNetwork":"OPTUS","suburb":"Bellara","idfa":"60A63734A27E40249331658F1AC670A1","deviceId":"BBD901JaseJuelz454E100000000000000000","firstSignUpDate":"13/07/2018","name":"Harley Connolly","gender":"male","emailVerificationFlag":false,"lastUpdateDate":"20/07/2018","link":"dummy"}
Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the error stream for more details.
Probably I am late, but for anyone still looking for an answer, try this:
Change your date format to "YYYY-MM-DD". BigQuery detects that the field value is a date, and it won't allow any format other than "YYYY-MM-DD".
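A small sketch of that reformatting step in Java (the dd/MM/yyyy source pattern is inferred from the createDate value in the failing record):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class BigQueryDateFix {
    public static void main(String[] args) {
        DateTimeFormatter source = DateTimeFormatter.ofPattern("dd/MM/yyyy");
        // e.g. the createDate value from the failing record.
        String createDate = "13/07/2018";
        String forBigQuery = LocalDate.parse(createDate, source)
                .format(DateTimeFormatter.ISO_LOCAL_DATE); // "2018-07-13"
        System.out.println(forBigQuery);
    }
}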
I need your help with this issue.
I have a Talend job which loads data from one table to another with a simple tMap.
I call it a mysterious error because it happened just for specific datetimes:
java.sql.SQLException: Could not parse column as timestamp, was: "2009-06-01 00:00:00"
Thousands of rows before the row containing this value don't generate the error.
When I modify this date 2009-06-01 00:00:00 to another, or just change the day part, the month, or even the hour, it goes through without error.
The datasource is a MariaDB database and the destination is a MySQL database.
Thanks for your help.
Here is the part of the generated code where the error occurs:
if (colQtyInRs_tMysqlInput_5 < 6) {
    row5.created_at = null;
} else {
    if (rs_tMysqlInput_5.getString(6) != null) {
        String dateString_tMysqlInput_5 = rs_tMysqlInput_5.getString(6);
        if (!("0000-00-00").equals(dateString_tMysqlInput_5)
                && !("0000-00-00 00:00:00").equals(dateString_tMysqlInput_5)) {
            row5.created_at = rs_tMysqlInput_5.getTimestamp(6);
        } else {
            row5.created_at = (java.util.Date) year0_tMysqlInput_5.clone();
        }
    } else {
        row5.created_at = null;
    }
}
Since you provided no further information on:
how the source data looks, e.g. is it a date field or a string field in the source?
why parsing would happen at all; this seems to be connected to the source data being a string
what the parsing pattern looks like
I am going to assume a bit here.
1st: I assume you provide a string in the source. Since this is the case, you need to make sure that the date in the column is always formatted the same way. Also, you'd need to show us the timestamp format used for parsing.
2nd: You said you need to change the value of the date for it to work. This looks to me like a parsing issue: for example, you may have switched the month and day fields by accident, e.g. yyyy-dd-MM HH:mm:ss or something similar. Again, this depends on your parsing string.
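For illustration, here is a minimal sketch of how such a swapped pattern plays out on the failing value (the swapped pattern is the assumed mistake, not something confirmed from your job):

import java.text.SimpleDateFormat;

public class PatternSwapDemo {
    public static void main(String[] args) throws Exception {
        String value = "2009-06-01 00:00:00";
        // Correct pattern: June 1st, 2009.
        System.out.println(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(value));
        // Swapped day/month pattern: silently becomes January 6th, 2009,
        // while values whose day part is greater than 12 fail outright in strict mode.
        SimpleDateFormat swapped = new SimpleDateFormat("yyyy-dd-MM HH:mm:ss");
        swapped.setLenient(false);
        System.out.println(swapped.parse(value));
    }
}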
Since there is often a bit of confusion about this, I created a blog post on date handling in Talend which you could consult as well.
This error was due to the timezone. After trying many solutions, I thought about changing the timezone, because my laptop is in UTC while the database timezone is UTC+01, so Talend generated this error in the local environment.
Hope it will help someone else.
I have a Cassandra DB with data that has a TTL of X hours for every column value, and this needs to be pushed to an ElasticSearch cluster in real time.
I have seen past posts on StackOverflow that advise using tools such as Logstash, or pushing data directly from the application layer.
However, how can one preserve the TTL of the imported data once it has been copied to ES version >= 5.0?
There was once a field called _ttl, which was deprecated in ES 2.0 and removed in ES 5.0.
As of ES 5, there are now two ways of preserving the TTL of your data. First, make sure to create a TTL field in your ES documents that is set to the creation date of your row in Cassandra plus the TTL seconds. So if in Cassandra you have a record like this:
INSERT INTO keyspace.table (userid, creation_date, name)
VALUES (3715e600-2eb0-11e2-81c1-0800200c9a66, '2017-05-24', 'Mary')
USING TTL 86400;
Then you should send the following document to ES:
{
  "userid": "3715e600-2eb0-11e2-81c1-0800200c9a66",
  "name": "Mary",
  "creation_date": "2017-05-24T00:00:00.000Z",
  "ttl_date": "2017-05-25T00:00:00.000Z"
}
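Computing ttl_date before indexing is just the creation timestamp plus the TTL seconds; a minimal sketch matching the record above:

import java.time.Instant;

public class TtlDate {
    public static void main(String[] args) {
        Instant creationDate = Instant.parse("2017-05-24T00:00:00Z");
        long ttlSeconds = 86400; // matches USING TTL 86400 above
        Instant ttlDate = creationDate.plusSeconds(ttlSeconds);
        System.out.println(ttlDate); // 2017-05-25T00:00:00Z
    }
}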
Then you can either:
A. Use a cron job that regularly performs a delete-by-query based on your ttl_date field, i.e. call the following command from your cron:
curl -XPOST localhost:9200/your_index/_delete_by_query -H 'Content-Type: application/json' -d '{
  "query": {
    "range": {
      "ttl_date": {
        "lt": "now"
      }
    }
  }
}'
B. Or use time-based indices and insert each document into an index matching its ttl_date field. For instance, the document above would be inserted into the index named your_index-2017-05-25. Then, with the Curator tool, you can easily delete indices that have expired.