MySQL data import changes time - mysql

I'm having issues with time zones in MySQL. Right now, I'm dumping the data via mysqldump, like so:
mysqldump -uuser -ppass --tab c:/temp --skip-dump-date dbName
This works exactly as intended, with the data in the database matching the .txt files that are generated. The problem comes with importing the data. To get around some foreign key issues during the import phase, I'm using the following code:
SET FOREIGN_KEY_CHECKS = 0;
LOAD DATA LOCAL INFILE 'c:/temp/tableName.txt' INTO TABLE tableName;
SET FOREIGN_KEY_CHECKS = 1;
This also has no issues, except afterwards when I check the database, all the TIMESTAMPs are shifted by 5 hours forwards. I know this has to be a problem with time zones (I'm UTC-05:00, so the shift time makes sense), but I don't understand what should be done to stop the database from assuming that a shift in time needs to be done.
Along my searches for answers, I came across a similar SO problem, but the issue was backwards. Importing was fine, exporting was shifted.
MySQL data export changes times
Furthermore, I have seen some suggestions to look at this information in MySQL, but I don't exactly know what I should do with this information now that I have it.
SELECT @@global.time_zone, @@session.time_zone;
Gives me:
SYSTEM +00:00
Is there a way to tell MySQL to import without changing TIMESTAMPs? Should I change some sort of time zone setting? If so, should I change it for import, or export? I'm not planning on moving the database across any time zones.
UPDATE 1
In the meantime, until I know the best practice, I have tried the following change directly before the block of LOAD DATA commands:
SET TIME_ZONE = '+00:00';
This has solved my problem (where I expect my dump to be the same as the files I used to create the database). Afterwards, I change the time back to -05:00, but I'm not sure if it was necessary.
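For reference, the full import block now looks roughly like this (the path and table name are the same placeholders as above):
SET FOREIGN_KEY_CHECKS = 0;
SET TIME_ZONE = '+00:00';  -- interpret the TIMESTAMP values in the file as UTC, matching the dump
LOAD DATA LOCAL INFILE 'c:/temp/tableName.txt' INTO TABLE tableName;
SET TIME_ZONE = '-05:00';  -- restore my usual offset afterwards (possibly unnecessary)
SET FOREIGN_KEY_CHECKS = 1;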

Use the --tz-utc option to mysqldump. From the documentation:
This option enables TIMESTAMP columns to be dumped and reloaded between servers in different time zones. mysqldump sets its connection time zone to UTC and adds SET TIME_ZONE='+00:00' to the dump file. Without this option, TIMESTAMP columns are dumped and reloaded in the time zones local to the source and destination servers, which can cause the values to change if the servers are in different time zones. --tz-utc also protects against changes due to daylight saving time. --tz-utc is enabled by default. To disable it, use --skip-tz-utc.
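As a minimal sketch (using the same credentials and database name as in the question), a plain dump with the option spelled out explicitly would look like:
mysqldump -uuser -ppass --tz-utc dbName > dbName.sql
The generated dbName.sql should then contain a SET TIME_ZONE='+00:00' statement near the top, so whichever server replays it interprets every TIMESTAMP value as UTC. Note that with --tab the data goes into bare .txt files that carry no such header, which is presumably why setting the session time zone manually before LOAD DATA (as in your update) also works.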

Related

Exporting with phpMyAdmin changes timestamps

When exporting a file through phpMyAdmin, some timestamps are put back an hour. How may I prevent this? I do not want timestamps tampered with. Here's a screenshot for the curious (see the dates).
I believe the cause may be the SET time_zone = "+00:00"; that is added to every export file.
Is this supposed to happen? Is it a known bug?
I'm running:
-- Server version: 5.5.37-0ubuntu0.14.04.1
-- PHP Version: 5.5.9-1ubuntu4
The times are not actually being 'tampered with'.
MySQL internally stores TIMESTAMP columns converted to UTC, then uses a mixture of system and session (client session) time zone values to determine what to display to the user.
You can check both of these values by running the following query yourself.
SELECT @@global.time_zone, @@session.time_zone;
So when your phpMyAdmin export generates its dump, it's specifying a session time_zone variable, so when the dump runs MySQL will convert all the values from that time zone back to UTC. When you then go to import it into another database, it will still convert them back to the UTC values you're expecting.
So to summarise: if the values in the dump with SET time_zone = "+00:00"; are all "1 hour behind" the values you see when querying via phpMyAdmin, it only appears this way because the phpMyAdmin connection has its time zone set one hour ahead of UTC.
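A quick way to see that no stored value actually changes (the table and column names here are made up for illustration):
SET time_zone = '+00:00';
SELECT created_at FROM my_table WHERE id = 1;  -- displayed as UTC, e.g. 2014-06-01 10:00:00
SET time_zone = '+01:00';
SELECT created_at FROM my_table WHERE id = 1;  -- same stored value, now displayed as 2014-06-01 11:00:00
The underlying TIMESTAMP never changes; only the conversion applied on the way out does.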

Large (1G) MySQL db taking 30 hours to import into WAMP, plus Null errors

I've been working on this for days, pretty frustrated.
Have a Magento database, about 1 GB with 3 million records; need to make a backup and import it onto my local machine. The local machine is running WAMP on a brand new gaming rig (16 GB RAM). Exported the db fine using phpMyAdmin into a .sql file.
Saw BigDump was highly recommended for importing a large db. Also found a link recommending that the dump include column names in every INSERT statement, which I did ( http://www.atomicsmash.co.uk/blog/import-large-sql-databases/ ).
Start importing. Hours go by (around 3-4). Get an error: Page unavailable, or wrong url! More searching, try suggestions ( mostly here: http://www.sitehostingtalk.com/f16/bigdump-error-page-unavailable-wrong-url-56939/ ) to drop the $linespersession to 500 and add a $delaypersession of 300. Run again, more hours, same error.
I then re-exported the db into two .sql dumps (one that held all the large tables with over 100K records), repeat, same error. So I quit using Bigdump.
Next up was the command line! Using Console2 I ran source mydump.sql. 30 hours go by. Then an error:
ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'collation_connection' can't be set to the value of 'NULL'
More searching, really varied explanations. I tried with the split files from before - run it again, same error.
I can't figure out what would cause both of these errors. I know that I got the same error on two different exports. I know there are a few tables that are between 1-300,000 rows. I also don't think 30 hours is normal (on a screaming-fast machine) for an import of only 1 GB, but I could be wrong.
What other options should I try? Is it the format of the export? Should it be compressed or not? Is there a faster way of importing? Any way of making this go faster?
Thanks!
EDIT
Thanks to some searching and @Bill Karwin's suggestion, here's where I'm at:
Grabbed a new mysqldump using ssh and downloaded it.
Imported the database 10 different times. Each time was MUCH faster (5-10 mins) so that fixed the ridiculous import time.
Used the command line: > source dump.sql
However, each import from that same dump.sql file has a different number of records. Of the 3 million records they differ by between 600 and 200,000 records. One of the imports has 12,000 MORE records than the original. I've tried with and without setting the foreign_key_checks = 0; I tried running the same query multiple times with exactly the same settings. Every time the number of rows are different.
I'm also getting these errors now:
ERROR 1231 (42000): Variable 'time_zone' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'sql_mode' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'foreign_key_checks' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'unique_checks' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'collation_connection' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'sql_notes' can't be set to the value of 'NULL'
Doesn't seem like these are that important from what I read. There are other warnings but I can't seem to determine what they are.
Any ideas?
EDIT: Solution removed here and listed below as a separate post
References:
https://serverfault.com/questions/244725/how-to-is-mysqls-net-buffer-length-config-viewed-and-reset
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_net_buffer_length
Make phpMyAdmin show exact number of records for InnoDB tables?
Export a large MySQL table as multiple smaller files
https://dba.stackexchange.com/questions/31197/why-max-allowed-packet-is-larger-in-mysqldump-than-mysqld-in-my-cnf
No, that's not a normal restore time, unless you're running MySQL on a 15 year old computer or you're trying to write the database to a shared volume over a very slow network. I can import a data dump of about that size in about 45 minutes, even on an x-small EC2 instance.
The error about setting variables to NULL appears to be a limitation of BigDump. It's mentioned in the BigDump FAQ. I have never seen those errors from restoring a dump file with the command-line client.
So here are some recommendations:
Make sure your local MySQL data directory is on a locally-attached drive -- not a network drive.
Use the mysql command-line client, not phpMyAdmin or BigDump.
mysql> source mydump.sql
Dump files are mostly a long list of INSERT statements; you can read Speed of INSERT Statements for tips on speeding up INSERT. Be sure to read the sub-pages it links to.
For example, when you export the database, check the radio button for "insert multiple rows in every INSERT statement" (this is incompatible with BigDump, but better for performance when you use source in the mysql client).
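To illustrate the difference (the table and values are invented), a dump with one row per INSERT looks like:
INSERT INTO catalog_product (id, sku) VALUES (1, 'ABC');
INSERT INTO catalog_product (id, sku) VALUES (2, 'DEF');
while the multi-row form batches many rows into a single statement, which the server can parse and commit far more cheaply:
INSERT INTO catalog_product (id, sku) VALUES (1, 'ABC'), (2, 'DEF'), (3, 'GHI');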
Durability settings are recommended for production use, but they come with some performance penalties. It sounds like you're just trying to get a development instance running, so reducing the durability may be worthwhile, at least while you do your import. A good summary of reducing durability is found in MySQL Community Manager Morgan Tocker's blog: Reducing MySQL durability for testing.
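As a rough sketch of the kind of settings commonly relaxed for this purpose (development instance only, and only while the import runs):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- don't flush the redo log to disk on every commit
SET GLOBAL sync_binlog = 0;                     -- don't sync the binary log on every commit (only relevant if binary logging is on)
-- ... run the import, then put the stricter values back:
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
SET GLOBAL sync_binlog = 1;
The blog post above covers these and other options in more detail.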
Re your new questions and errors:
A lot of people report similar errors when importing a large dump file created by phpMyAdmin or Drupal or other tools.
The most likely cause is that you have some data in the dump file that is larger than max_allowed_packet. This MySQL config setting is the largest size for an individual SQL statement or an individual row of data. When you exceed this in an individual SQL statement, the server aborts that SQL statement, and closes your connection. The mysql client tries to reconnect automatically and resume sourcing the dump file, but there are two side-effects:
Some of your rows of data failed to load.
The session variables that preserve time_zone and other settings during the import are lost, because they are scoped to the session. When the reconnect happens, you get a new session.
The fix is to increase your max_allowed_packet. The default level is 4MB on MySQL 5.6, and only 1MB on earlier versions. You can find out what your current value for this config is:
mysql> SELECT @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|              4194304 |
+----------------------+
You can increase it as high as 1GB:
mysql> set global max_allowed_packet = 1024*1024*1024;
Then try the import again:
mysql> source mydump.sql
Also, if you're measuring the size of the tables with a command like SHOW TABLE STATUS or a query against INFORMATION_SCHEMA.TABLES, you should know that the TABLE_ROWS count is only an estimate -- it can be pretty far off, like +/- 10% (or more) of the actual number of rows of the table. The number reported is even likely to change from time to time, even if you haven't changed any data in the table. The only true way to count rows in a table is with SELECT COUNT(*) FROM SomeTable.
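For example (the schema and table names here are placeholders):
-- estimate, can be off by 10% or more for InnoDB
SELECT TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'magento' AND TABLE_NAME = 'catalog_product_entity';
-- exact count, the only reliable way to compare old and new servers
SELECT COUNT(*) FROM magento.catalog_product_entity;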
SOLUTION
For anyone who wants a step-by-step guide:
Using PuTTY, grab a mysql dump of the database (the leading > below is just the prompt, don't type it, and replace the capitalised placeholders with the appropriate info)
> mysqldump -uUSERNAME -p DATABASENAME > DATABASE_DUMP_FILE_NAME.sql
You'll get a password prompt, type it in, hit enter. Wait till you get a prompt again. If you're using an FTP client go to the root of your host and you should see your file there, download it.
Locally get a mysql prompt by navigating to where your mysql.exe file is (there's a few ways of doing this, this is one of them) and typing:
> mysql.exe -u USERNAME NEW_DATABASE
Now you're in the mysql prompt. Turn on warnings...just in case
mysql > \W
Increase the max_allowed_packet to a full gigabyte. I've seen references to also changing the net_buffer_length, but after 5.1.31 it doesn't seem to be changed (link at the bottom)
mysql > SET global max_allowed_packet = 1024*1024*1024;
Now import your sql file
mysql > source C:\path\to\DATABASE_DUMP_FILE_NAME.sql
If you want to check whether all of the records imported, you can either run SELECT COUNT(*) FROM SomeTable, or:
Go to C:\wamp\apps\phpmyadmin\config.inc.php
At the bottom before the ?> add:
/* Show exact row counts for large InnoDB tables */
$cfg['MaxExactCount'] = 2000000;
This is only recommended for a development platform, but it's really handy when you have to scan a bunch of tables/databases. It will probably slow things down with large data sets.

SQL Server issue

I hope this is not off-topic, but I have a real problem that I could use some advice on.
I have an application that upgrades its own SQL Server database (from previous versions) on startup. Normally this works well, but a new version has to alter several nvarchar column widths.
On live databases with large amounts of data in the tables this is taking a very long time. There appear to be two problems: one is that SQL Server seems to be processing the data (possibly rewriting it), even though it isn't actually being changed, and the other is that the transaction log gobbles up a huge amount of space.
Is there any way to circumvent this issue? It's only a plain ALTER TABLE ... ALTER COLUMN command, changing nvarchar(x) to nvarchar(x+n), nothing fancy, but it is causing an 'issue' and much dissatisfaction in the field. If there were a way of changing the column width without processing the existing data, and somehow suppressing the transaction log activity, that would be handy.
It doesn't seem to be a problem with Oracle databases.
An example command:
IF EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE table_name='ResourceBookings' AND column_name = ('ResourceBookerKey1') AND character_maximum_length <= 50)
ALTER TABLE [ResourceBookings] ALTER COLUMN [ResourceBookerKey1] NVARCHAR(80) NULL
As you can see, the table is only changed if the column width needs to be increased
TIA
Before upgrading, make sure the SQL Server database's Recovery Model is set to "Simple". In SSMS, right-click the database, select Properties, and then click the Options page. Record the "Recovery model" value. Set the Recovery model to "Simple" if it isn't already (I assume it's set to Full).
Then run the upgrade. After the upgrade, you can restore the value back to what it was.
Alternately you can script it with something like this:
Before upgrade:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
After upgrade:
ALTER DATABASE MyDatabase SET RECOVERY FULL;
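If you prefer not to dig through SSMS, you can also read the current setting first so you know what to put back afterwards (the database name is just the same example as above):
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';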

Date value in mysql tables changes while exporting mysql db

I am exporting mysql table to setup it on live, but while exporting DB I noticed that my date column value is changing.. If it was "2007-06-11 00:00:00" earlier then after export it is now changed to "2007-06-10 18:30:00",
why this is so?
anybody have idea about this?
Bug #13052 existed in versions of MySQL prior to 5.0.15, in which dump files expressed TIMESTAMP columns in the server's time zone but did not include a SET TIME_ZONE command to ensure anyone (or any subsequent server) reading the dump file understood that; without such a command, receiving servers assume that any TIMESTAMP values are in their own default time zone.
Therefore a transfer between servers in timezones offset by 18:30 (e.g. from South Australia to California) would lead to the behaviour you observe.
Solutions to this problem, in some vague order of preference, include:
Upgrade the version of mysqldump on the original server to 5.0.15 or later (will result in the dumpfile expressing all TIMESTAMP values in UTC, with a suitable SET TIME_ZONE statement at the start);
Prior to export (or import), change the global time_zone variable on the source (or destination) server, so that it matches the setting on the other server at the time of import (or export):
SET GLOBAL time_zone = 'America/Los_Angeles'; -- ('Australia/Adelaide')
UPDATE the data after the fact, applying MySQL's CONVERT_TZ() function:
UPDATE my_table
SET my_column = CONVERT_TZ(
  my_column,
  'America/Los_Angeles',
  'Australia/Adelaide'
);
If using either solution 2 or solution 3, be careful to use the exact time zone of the relevant server's time_zone variable, in such a manner as to include any daylight saving time. However, note that as documented under MySQL Server Time Zone Support: "Named time zones can be used only if the time zone information tables in the mysql database have been created and populated." The article goes on to explain how to create and populate the time zone information tables.
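On most Unix-like systems, populating those tables amounts to something like the following (run with privileges on the mysql system schema):
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
After that, named zones such as 'Australia/Adelaide' can be used in time_zone settings and in CONVERT_TZ().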
Before exporting the database, follow these steps in phpMyAdmin:
Choose the Custom export option.
Uncheck the checkbox labelled "Dump TIMESTAMP columns in UTC (enables TIMESTAMP columns to be dumped and reloaded between servers in different time zones)".
(The original answer illustrated this with a screenshot of the export dialog.)

question about MySQL database migration

I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I have in mind involves schema changes as well, for example: adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP or Python script (the two languages I know) to connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column will have default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't quite work. Here is the detail: I have FOUR old databases, which I can name 'DB_a', 'DB_b', 'DB_c', 'DB_d'. The old table A has 28 columns, and I want to add each row in table A into the new database with a new column ID 'DB_x' (x indicating which database it comes from). Since I can't determine the database ID from the row's content, the only way I can identify them is through some user-input parameters.
Is there any tools or a better method than writing a script yourself? Here, I dont need to worry about multithread writing problems etc.., I mean the old database will be down (not open to public usage etc.., only for upgrade ) for a while.
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump step completely.
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
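A sketch of what such an upgrade script might contain, using the 28-to-29 column example from the question (the new column names are invented for illustration):
-- new column with a default, so all existing rows get 0 automatically
ALTER TABLE A ADD COLUMN new_col INT NOT NULL DEFAULT 0;
-- optional: tag every row with the database it came from
ALTER TABLE A ADD COLUMN db_source VARCHAR(10);
UPDATE A SET db_source = 'DB_a';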
Use mysqldump to dump the data, then load it on the new server with mysql < output.txt. Now the old data is on the new server. Manipulate as necessary.
Sure there are tools that can help you achieving what you're trying to do. Mysqldump is a premier example of such tools. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump (with the --no-data option) to fetch the schema only
2) You alter the schema you have dumped, adding new columns
3) You create your new schema (mysql < dump.sql - just google for mysql backup restore for more help on the syntax)
4) Dump your data using the mysqldump --complete-insert option (see link above)
5) Import your data, using mysql < data.sql
This should do the job for you, good luck!
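Roughly, those five steps translate into commands like these (user, database, and file names are placeholders; --no-create-info keeps the table definitions out of the data dump):
mysqldump -u user -p --no-data old_db > schema.sql
# edit schema.sql by hand to add the new columns, then:
mysql -u user -p new_db < schema.sql
mysqldump -u user -p --no-create-info --complete-insert old_db > data.sql
mysql -u user -p new_db < data.sql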
Adding extra columns can be done on a live database:
ALTER TABLE [table-name] ADD [column-name] MEDIUMINT(8) DEFAULT 0;
MySQL will fill the new column with the default value for all existing rows.
So here is what I would do:
Make a copy of your old database with the MySQL dump command.
Run the resulting SQL file against your new database; now you have an exact copy.
Write a migration.sql file that will modify your database with table-altering commands and, for complex conversions, some temporary MySQL stored procedures.
Test your script (if it fails, go back to (2)).
If all is OK, then go to (1) and go live with your new database.
These are all valid approaches, but I believe you want to write a SQL statement that generates the INSERT statements needed to support the new columns you have.