MySQL data sync - change timezone

I have two data environments: 1) a data source, and 2) a production database powering a website. These two environments are in different timezones.
I am updating my production database incrementally, using:
1. mysqldump - for syncing newly added records
2. SQLyog SJA - for syncing updated records
I have a column named modified_time (modified_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP) in each table to store the last modified time.
While syncing this data between the two timezones, I am not able to change the timezone. I want to know how I can convert from the source timezone to the target timezone while syncing.

This is not possible at the DB level, and even if it were possible it would be inefficient. I would say deal with it in your application; it's simple: all the data is in a different timezone, so you just need to shift it by a constant to get your time.
Then again, if the source data is using UTC (which is recommended) then you don't have any issue at all.
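If you do want to shift the values during the sync itself, MySQL's CONVERT_TZ() function can do it at extraction time, provided you export via a query rather than a raw dump. A minimal sketch, assuming a hypothetical table my_table and an India-to-UTC conversion (adjust the zone names to your servers; named zones require the mysql timezone tables to be loaded):
SELECT id,
       payload,
       CONVERT_TZ(modified_time, 'Asia/Kolkata', 'UTC') AS modified_time  -- source tz -> target tz
FROM my_table
WHERE modified_time > '2013-01-01 00:00:00';  -- incremental cutoff (example)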

Related

AWS DMS - Microsecond precision for CDC on MySQL as source endpoint

I am using AWS DMS to migrate data from MySQL as the source endpoint to S3 as the target endpoint.
I want to track updates from the source, so during configuration I enabled the TimestampColumnName property (column name: event_timestamp).
In the result, I am getting the timestamp of the records/events, but NOT with microsecond precision.
I want microsecond precision to build sequencing logic on top of it.
I have investigated the properties of both the source and the target endpoints but am not getting the desired result.
Can somebody take a look and suggest whether I am missing any property?
The output format for my files in S3 is Parquet.
Unfortunately, the DATETIME column added by the AWS DMS S3 TimestampColumnName setting for a change data capture (CDC) load from a MySQL source will have only second precision, because the transaction timestamp in the MySQL binary log only has seconds.
The simplest solution is to add a new column to the MySQL table: a timestamp with microsecond precision, with its value set automatically on insert and/or update, and use this column as event_timestamp:
ts TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6) -- the precision of CURRENT_TIMESTAMP must match the column's
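For example, a sketch of adding such a column to an existing table (the table name orders is hypothetical):
ALTER TABLE orders
  ADD COLUMN ts TIMESTAMP(6) NOT NULL
    DEFAULT CURRENT_TIMESTAMP(6)
    ON UPDATE CURRENT_TIMESTAMP(6);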
Also, check that in the AWS DMS S3 target settings ParquetTimestampInMillisecond is false (or not present / unset; false is the default).
The AWS DMS S3 TimestampColumnName setting adds a column with a timestamp to the output.
For a 'static' (full load) read, it generates the current timestamp:
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For CDC, it reads the transaction time from the database transaction log:
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
And its precision will be that of the timestamp in the database transaction log:
...the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
CDC mode is essentially replication, so the source database must be configured to write such a transaction log; the database writes transaction info to this log along with the transaction / commit timestamp.
In the case of MySQL, this is the binary log, and the MySQL binlog timestamp is only 32 bits - just seconds.
Also, this transaction timestamp may not always be in line with the actual order of transactions or the order in which changes were actually committed (link 1, link 2).
This question is over a year old but I faced the same/similar issue and thought I'd explain how I solved it in case it could help others.
I have tables in RDS and am using DMS to migrate them from RDS to S3. In the DMS task settings, I enabled the timestamp column and the Parquet file format. I want to use the CDC files that get stored in S3 to upsert into my data lake, so I needed to deduplicate the rows by finding the latest action upon a specific record in the RDS table. But just like the problem you faced, I noticed that the timestamp column did not have high precision, so selecting rows with the max timestamp did not work: it would return multiple rows. So I added a new row_number column, partitioned by id and ordered by the timestamp column, and selected MAX(row_number). This gave me the latest action from the CDC rows that was applied to my table.
from pyspark.sql.functions import row_number
from pyspark.sql.window import Window

table = table.withColumn("row_number", row_number().over(Window.partitionBy("table_id").orderBy("dms_timestamp")))
The above is PySpark code, as that's the framework I'm using to process my Parquet files, but you can do the same in SQL (see the sketch below). I noticed that when the records are ordered by the timestamp column, they maintain their original order even if the timestamps are the same.
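A sketch of the same deduplication in SQL, with hypothetical table and column names matching the snippet above (here the window is ordered descending, so the latest change per record is simply rn = 1):
SELECT *
FROM (
  SELECT t.*,
         ROW_NUMBER() OVER (PARTITION BY table_id
                            ORDER BY dms_timestamp DESC) AS rn
  FROM cdc_events t
) AS ranked
WHERE rn = 1;  -- latest change event per record; with equal timestamps the winner depends on the underlying row order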
Hope this helps with the sequencing logic you were trying to implement.

Migrate data from MySQL to MySQL with Kettle (Spoon), changing timezone

I'm migrating and transforming some data from one MySQL db to another MySQL db with a slightly different structure. The main difference is that in the first database dates are expressed in the local timezone (Europe/Rome), while on the target db they are in UTC.
I'm sharing my db connection in all transformations.
I have already built my transformations and everything works fine, but I haven't figured out a way to automatically convert all my dates to the right timezone. I was hoping for something at the connection level, so that all dates would be converted automatically.
Otherwise I'll have to add an extra transformation for every table and field (and there are many)!
I tried the serverTimezone option at the database level, but it didn't work.
Is there a smart way to do this conversion without adding new transformations?
Eventually I found the solution to my problem using the approach described by adamnyc here.
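Another common workaround, not from the linked answer: do the conversion in the source query of each table input step, so the dates arrive already in UTC. A sketch with hypothetical table and column names:
SELECT id,
       name,
       CONVERT_TZ(created_at, 'Europe/Rome', 'UTC') AS created_at
FROM customers;
This still touches every extraction query, but it keeps the conversion in one visible place per table instead of adding extra transformation steps.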

Is mysql.time_zone_name's Time_zone_id "static"?

I'm building an application that needs to store timezones.
Rather than building my own table of timezones, would it be copacetic to use MySQL's existing table (mysql.time_zone_name)?
If I use a Time_zone_id of '94' and MySQL updates its timezone tables, will 94 still be America/Chicago?
MySQL loads its timezone tables from your OS's zoneinfo directory (typically /usr/share/zoneinfo) when you run mysql_tzinfo_to_sql, and then keeps the data in the mysql system database.
So the entries are only as stable as that directory. If the OS adds or removes time zones in zoneinfo, and you reinstall your MySQL instance and run mysql_tzinfo_to_sql again to load the changed time zones, then the numeric time_zone_id values in MySQL could change.
I would recommend using the timezone name, not the numeric id.
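A sketch of both options, using the mysql.time_zone_name system table:
-- Store the stable name ('America/Chicago') and resolve the id only if needed:
SELECT Time_zone_id
FROM mysql.time_zone_name
WHERE Name = 'America/Chicago';

-- Or skip the numeric id entirely and use the name directly:
SELECT CONVERT_TZ('2024-03-10 12:00:00', 'UTC', 'America/Chicago');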

How to update every MySQL row after a specific interval of time?

I went through lots of links, but I am still confused about which approach to use.
I have a MySQL database on the server side; when a user hits the server with some values, I save the server's current time at the same moment (JSP is used on the server side).
Now I want to update some values in a row after a specific interval of time, measured from the current time saved in the database. (Every row has a different saved time.)
You will have to use MySQL events. This tutorial shows an example of how to configure them via phpMyAdmin: http://www.youtube.com/watch?v=7ZRZoCsrKis.
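A minimal sketch of such an event (the table, columns, and intervals are hypothetical; the event scheduler must be enabled):
SET GLOBAL event_scheduler = ON;

CREATE EVENT expire_stale_rows
ON SCHEDULE EVERY 1 MINUTE
DO
  UPDATE user_hits
  SET status = 'expired'
  WHERE status = 'active'
    AND saved_time < NOW() - INTERVAL 30 MINUTE;
Each run updates only the rows whose saved time has passed the interval, so every row is effectively updated on its own schedule.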

Date value in MySQL tables changes while exporting MySQL db

I am exporting a MySQL table to set it up on a live server, but while exporting the DB I noticed that my date column's value changes: where it was "2007-06-11 00:00:00" before, after the export it is "2007-06-10 18:30:00".
Why is this so?
Does anybody have an idea about this?
Bug #13052 existed in versions of MySQL prior to 5.0.15: dump files expressed TIMESTAMP columns in the server's timezone but did not include a SET TIME_ZONE command to tell anyone (or any subsequent server) reading the dump file which timezone that was; without such a command, receiving servers assume that any TIMESTAMP values are in their own default timezone.
Therefore a transfer between servers whose timezones are offset by 5:30 (e.g. from India, UTC+05:30, to a server running on UTC, which matches the shift you observe) would lead to this behaviour.
Solutions to this problem, in some vague order of preference, include:
Upgrade the version of mysqldump on the original server to 5.0.15 or later (will result in the dumpfile expressing all TIMESTAMP values in UTC, with a suitable SET TIME_ZONE statement at the start);
Prior to export (or import), change the global time_zone variable on the source (or destination) server, so that it matches the setting on the other server at the time of import (or export):
SET GLOBAL time_zone = 'UTC'; -- (or 'Asia/Kolkata', to match the other server)
UPDATE the data after the fact, applying MySQL's CONVERT_TZ() function:
UPDATE my_table
SET my_column = CONVERT_TZ(
  my_column,
  'UTC',
  'Asia/Kolkata'
);
If using either solution 2 or solution 3, take care to use the exact timezone of the relevant server's time_zone variable, in such a manner as to account for any daylight savings time. However, note that, as documented under MySQL Server Time Zone Support, "Named time zones can be used only if the time zone information tables in the mysql database have been created and populated." The article goes on to explain how to create and populate the time zone information tables.
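A quick way to check whether those tables are populated: CONVERT_TZ() returns NULL for named zones when they are not.
SELECT CONVERT_TZ('2007-06-11 00:00:00', 'Asia/Kolkata', 'UTC');
-- NULL                  -> timezone tables not loaded yet
-- '2007-06-10 18:30:00' -> named time zones are available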
Before exporting the database, just follow these steps:
1. Export with the custom option.
2. Uncheck the checkbox "Dump TIMESTAMP columns in UTC (enables TIMESTAMP columns to be dumped and reloaded between servers in different time zones)".