How can I completely move a MySQL database from one server to another?

I'm transferring a complete database from an online server to my localhost server, but not all records are being transferred. Is there any way to transfer the complete data with the same number of rows?
I tried Navicat, exporting and importing single tables, and importing and exporting .sql and gzip files, but all the results are different.
My hosting is shared.
The software on localhost is XAMPP.

You can try mysqldump.
mysqldump -h hostname -u user -pPassWord --skip-triggers --single-transaction --complete-insert --extended-insert --quote-names --disable-keys dataBaseName > DUMP_dataBaseName.sql
Then move the file DUMP_dataBaseName.sql to your localhost and run:
mysql -hHost -uUser -pPass -DBase < DUMP_dataBaseName.sql

The data is probably not missing. Your tables are most likely InnoDB, and the row count shown when you browse the table list is only an estimate for InnoDB tables; click into any table (or count the rows directly) to see the exact number.
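For example, a minimal sketch to compare exact row counts on both servers (your_table_name is a placeholder for one of your tables):
mysql -h hostname -u user -p -e "SELECT COUNT(*) FROM dataBaseName.your_table_name"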

One issue I have run into countless times when moving WordPress sites is special characters (specifically single quotes and double quotes). The database will export fine, but upon import it breaks at an "illegal" quote. My workflow now consists of exporting the database, running a find and replace on the SQL file to filter out the offending characters, and then importing.
Without knowing more about your specific situation, that's just something I would look into.

Related

Foolproof methods for MySQL -> MySQL migration

I've been migrating databases (a few GB in size) in MySQL Workbench 6.1, from one MySQL server to another. Never having done this before, I thought it was 99% reliable. Instead, 2 out of 3 tries have failed.
My databases don't have complex features (triggers, stored procedures and functions, ...). The errors, though, are difficult to interpret, almost always about tables failing to export, reason unknown. There might occasionally be a duplicated key index in the source, but that shouldn't prevent an export from happening?
I've tried all the different methods available in the interface:
1) Server > Data Export > Data Import
2) Migration wizard
3) Schema transfer wizard
4) Reverse engineer
but no real difference.
Also, all the methods seem to be variants of the same thing. Do these menu options rely on the same procedure internally? How different are they really?
My questions are generic:
1) Is there a foolproof method that is relaxed about errors? E.g. is mysqldbcopy from MySQL Utilities much better than the Workbench wizards?
2) Does the configuration of the Workbench wizards make any difference (e.g. a checkbox that causes errors by being too demanding when the source DB has a problem)? I just want to transfer the DB, not achieve perfection on the target server. I've switched SSL=NO, but it's still not working.
3) What is the single most important cause of errors in a migration, e.g. server overload, insufficient memory, table structure?
Thanks in advance,
There might occasionally be a duplicated key index in the source, but that shouldn't prevent an export from happening?
Yes, it shouldn't prevent the export operation.
I've tried all the different methods available in the interface:
All of the interfaces you have used might have a timeout configured, so they don't fully execute because your database is BIG.
So how do you migrate a MySQL database from one server to another?
To do it properly, I suggest you use the command line, like this:
Step 1: Create a backup file on the old server:
mysqldump -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
Step 2: Transfer the backup file to the new server.
Step 3: Import the backup file on the new server:
mysql -u [[user_name]] -p[[password]] [[db_name]] < db_backup.sql
Pro tip:
You can combine steps 1 & 2 if you have remote MySQL access enabled on the old server. Just execute this command on the new server and it will write the backup file into the current directory of the new server:
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
where [[xxx.xx.xxx.xxx]] represents the IP address/hostname of the old server.
Extra note:
Please note that there is no space between -p and [[password]]. You can also omit [[password]] entirely if you consider it a security risk to include the password in the command; the client will prompt for it instead.
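For example, a minimal sketch with the password omitted (the same applies to the mysql import command):
# the client prompts "Enter password:" instead of exposing it in the process list or shell history
mysqldump -u [[user_name]] -p [[db_name]] > db_backup.sql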
If you have access to a terminal you can try using mysqldump, and you could also try the Percona XtraBackup tool.
mysqldump: (if your DB is too large, I suggest you run these inside screen)
Backup all DBs: mysqldump -u root -pxxxx --all-databases > all_db_backup.sql
Backup tables: mysqldump -u root -pxxxx DatabaseName table1 table2 > tables.sql
Backup individual databases: mysqldump -u root -pxxx --databases DB1 DB2 > Only_DB.sql
To import: sync all the files to the other server and try importing as shown below.
mysql -u root -pxxxx < all_db_backup.sql (use screen for large databases)
Individual DB: mysql -u root -pxxx DBName < DB.sql
(Note: before you import, make sure the backed-up file already has CREATE DATABASE IF NOT EXISTS statements; if not, create those databases before importing.)
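For example, a minimal sketch assuming the dump file DB.sql contains no CREATE DATABASE statement (DBName is a placeholder):
# create the target database first, then import into it
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS DBName"
mysql -u root -p DBName < DB.sql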

Importing a MySQL Database on Localhost

So I wanted to format my system, and I had a lot of work done on my localhost that involves databases. I followed the normal way of backing up a database by exporting it into an SQL file, but I think I made a mess by backing up everything into one SQL file (I mean the whole localhost was exported to just one SQL file).
The problem now is: when I try to import the backed-up file (localhost.sql), I get errors like
tables already exist.
information_schema
performance_schema
and every other table that comes with XAMPP, which has been preventing me from importing the database.
These tables are the phpMyAdmin tables that came with XAMPP. I have been trying to get past this for days.
My question now is: can I extract the different databases from the same combined SQL file?
To import a database you can do either of the following. From the command line:
mysql -u username -p database_name < /path/to/database.sql
From within mysql:
mysql> use database_name;
mysql> source database.sql;
The error is quite self-explanatory. The tables information_schema and performance_schema are already in the MySQL server instance that you are trying to import to.
Both of these databases are default in MySQL, so it is strange that you would be trying to import these into another MySQL installation. The basic syntax to create a .sql file to import from the command line is:
$ mysqldump -u [username] -p [database name] > sqlfile.sql
Or for multiple databases:
$ mysqldump --databases db1 db2 db3 > sqlfile.sql
Then to import them into another MySQL installation:
$ mysql -u [username] -p [database name] < sqlfile.sql
If the database already exists in MySQL then you need to do:
$ mysqlimport -u [username] -p [database name] sqlfile.sql
This seems to be the command you want to use, however I have never replaced the information_schema or performance_schema databases, so I'm unsure if this will cripple your MySQL installation or not.
So an example would be:
$ mysqldump -uDonglecow -p myDatabase > myDatabase.sql
$ mysql -uDonglecow -p myDatabase < myDatabase.sql
Remember not to provide a password on the command line, as this will be visible in plain text in the command history.
The point the previous responders seem to be missing is that the dump file localhost.sql, when fed into mysql using
% mysql -u [username] -p [databasename] < localhost.sql
generates multiple databases, so specifying a single database name on the command line is illogical.
I had this problem and my solution was to not specify [databasename] on the command line and instead run:
% mysql -u [username] -p < localhost.sql
which works.
Actually, it doesn't work right away because of previous attempts, which did create some structure inside MySQL; those bits in localhost.sql make mysql complain because they already exist from the first time around, so they can't be created the second time around.
The solution to THAT is to manually edit localhost.sql with modifications like
INSERT IGNORE for INSERT (so it doesn't re-insert the same rows, nor complain),
CREATE DATABASE IF NOT EXISTS for CREATE DATABASE,
CREATE TABLE IF NOT EXISTS for CREATE TABLE,
and to delete ALTER TABLE commands entirely if they generate errors because they've already been executed (and perhaps INSERTs and CREATEs too, for the same reasons). You can check the tables with DESCRIBE and SELECT statements to make sure the alterations have taken hold, for confidence.
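For dumps too large to edit by hand, a sed one-liner can apply those substitutions. A minimal sketch, assuming GNU sed and that each statement starts at the beginning of a line, as mysqldump writes them (double-check the result if your dump already uses IF NOT EXISTS):
sed -i -e 's/^INSERT INTO/INSERT IGNORE INTO/' \
       -e 's/^CREATE DATABASE /CREATE DATABASE IF NOT EXISTS /' \
       -e 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' localhost.sql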
My own localhost.sql file was 300M, which my favorite editor, emacs, complained about, so I had to pull out bits using
% head -n 20000 localhost.sql | tail -n 10000 > 2nd_10k_lines.sql
and go through it 10k lines at a time. It wasn't too hard because Drupal was responsible for an enormous amount, the vast majority, of the junk in there, and I didn't want to keep any of that, so I could carve away enormous chunks easily.
The best way to import a database on localhost takes 5 simple steps:
1) Zip the SQL file first to compress the database size.
2) Go to the terminal.
3) Create an empty database.
4) Run the command to unzip and import the database in one go:
unzip -p /pathoffile/database_file.zip | mysql -u username -p database_name
5) Enter the password when prompted.

Complete database reset for MySQL dump?

This may seem like a very dumb question but I didn't learn it in any other way and I just want to have some clarification.
I started to use MySQL a while ago and in order to test various scenarios, I back up my databases. I used MySQL dump for that:
Export:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases > filename.sql
Import:
mysql -hSERVER -uUSER -pPASSWORD < filename.sql
Easy enough, and it worked quite well up until now, when I noticed a little problem with this "setup": it does not fully "reset" the databases and tables. If, for example, an additional table is added AFTER a dump file has been created, that additional table will not disappear if you import the same dump file. It essentially only "corrects" tables that are already there and recreates any databases or tables that are missing, but it does not remove additional tables whose names are not in the dump file.
What I want to do is to completely reset all the databases on a server when I import such a dump file. What would be the best solution? Is there a special import function reserved for that purpose or do I have to delete the databases myself first? Or is that a bad idea?
You can use the parameter --add-drop-database to add a DROP DATABASE statement to the dump before each CREATE DATABASE statement.
e.g.
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --add-drop-database >filename.sql
See the mysqldump documentation for details.
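A minimal sketch of what that option adds (the exact version comments in your dump may differ); you can check your own dump with grep:
# with --add-drop-database, each database in the dump is preceded by roughly:
#   DROP DATABASE IF EXISTS `some_db`;
#   CREATE DATABASE /*!32312 IF NOT EXISTS*/ `some_db`;
grep -E 'DROP DATABASE|CREATE DATABASE' filename.sql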
There's nothing magic about the dump and restore processes you describe. mysqldump writes out SQL statements that describe the current state of the database or databases you are dumping. It has to fetch a list of tables in each database you're dumping, then it has to read the tables one by one and write them out as SQL. On databases of any size, this takes time.
So, if you create a new table while mysqldump is running, it may not pick up that new table. Similarly, if your application software changes contents of tables while mysqldump is running, those changes may or may not show up in the backup.
You can look at the .sql files mysqldump writes out to see what it has picked up. If you want to be sure that your dumped .sql files are perfect, you need to run mysqldump on a quiet server -- one where nobody is running data definition language (DDL) statements.
MySQL hot backup solutions are available. You may need to look into that.
The OP may want to look into
mysql_install_db
if they want a fresh start with the post-install default settings before restoring one or more dumped DBs. For production servers, another useful script is:
mysql_secure_installation
Also, they may prefer to dump the DB(s) they created separately:
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo > foo.sql
to avoid inadvertently changing the internal DBs:
mysql, information_schema, performance_schema.
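A minimal sketch of that approach, assuming shell access and the same placeholder credentials; it dumps every non-system database into its own file:
for db in $(mysql -uUSER -pPASSWORD -N -e "SHOW DATABASES" | grep -Ev '^(mysql|information_schema|performance_schema|sys)$'); do
    mysqldump -uUSER -pPASSWORD --databases "$db" > "$db.sql"
done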

Export a large MySQL table as multiple smaller files

I have a very large MySQL table on my local dev server: over 8 million rows of data. I loaded the table successfully using LOAD DATA INFILE.
I now wish to export this data and import it onto a remote host.
I tried LOAD DATA LOCAL INFILE to the remote host. However, after around 15 minutes the connection to the remote host fails. I think that the only solution is for me to export the data into a number of smaller files.
The tools at my disposal are phpMyAdmin, HeidiSQL and MySQL Workbench.
I know how to export as a single file, but not multiple files. How can I do this?
I just did an import/export of a (partitioned) table with 50 million records; it took just 2 minutes to export it from a reasonably fast machine and 15 minutes to import it on my slower desktop. There was no need to split the file.
mysqldump is your friend, and knowing that you have a lot of data, it's better to compress it:
#host1:~ $ mysqldump -u <username> -p <database> <table> | gzip > output.sql.gz
#host1:~ $ scp output.sql.gz host2:~/
#host1:~ $ rm output.sql.gz
#host1:~ $ ssh host2
#host2:~ $ gunzip < output.sql.gz | mysql -u <username> -p <database>
#host2:~ $ rm output.sql.gz
Take a look at mysqldump.
Your commands should be (from the terminal):
Export from db_name in your MySQL to backupfile.sql:
mysqldump -u user -p db_name > backupfile.sql
Import from backupfile.sql into db_name in your MySQL:
mysql -u user -p db_name < backupfile.sql
You have two options in order to split the information:
Split the output text file into smaller files (as many as you need; there are many tools to do this, e.g. split; see the sketch after these options).
Export one table at a time by adding the table name after the db_name, like so:
mysqldump -u user -p db_name table_name > backupfile_table_name.sql
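A minimal sketch of the split option, assuming the dump is only being split for transfer and reassembled before import (user@remote and db_name are placeholders):
# split the dump into ~100 MB pieces for transfer (names: part_aa, part_ab, ...)
split -b 100m backupfile.sql part_
scp part_* user@remote:/tmp/
# on the remote host, reassemble and import in one go
cat /tmp/part_* | mysql -u user -p db_name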
Compressing the file(s) (they are text files) is very efficient and can shrink them to about 20%-30% of their original size.
Copying the files to remote servers should be done with scp (secure copy), and interaction should take place over ssh (usually).
Good luck.
I found that the advanced options in phpMyAdmin allow me to select how many rows to export, plus the start point. This allows me to create as many dump files as required to get the table onto the remote host.
I had to adjust my php.ini settings, plus phpMyAdmin's 'ExecTimeLimit' config setting, as generating the dump files takes some time (500,000 rows in each).
I use HeidiSQL to do the imports.
As an example of the mysqldump approach for a single table
mysqldump -u root -ppassword yourdb yourtable > table_name.sql
Importing is then as simple as
mysql -u username -ppassword yourotherdb < table_name.sql
Use mysqldump to dump the table into a file.
Then use tar with the -z option to gzip the file.
Transfer it to your remote server (with ftp, sftp or another file transfer utility).
Then untar the file on the remote server.
Use mysql to import the file.
There is no reason to split the original file or to export it in multiple files.
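A minimal sketch of those steps, with placeholder host and database names:
# on the source server
mysqldump -u root -p yourdb yourtable > table_name.sql
tar -czf table_name.tar.gz table_name.sql
scp table_name.tar.gz user@remote_host:~/
# on the remote server
tar -xzf table_name.tar.gz
mysql -u username -p yourotherdb < table_name.sql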
If you are not comfortable with using the mysqldump command line tool, here are two GUI tools that can help you with that problem, although you have to be able to upload them to the server via FTP!
Adminer is a slim and very efficient DB manager tool that is at least as powerful as phpMyAdmin and consists of only ONE SINGLE FILE that has to be uploaded to the server, which makes it extremely easy to install. It works way better with large tables / DBs than phpMyAdmin does.
MySQLDumper is a tool developed especially to export / import large tables / DBs, so it will have no problem with the situation you describe. The only downside is that it is a bit more tedious to install as there are more files and folders (~350 files in ~1.5 MB), but it shouldn't be a problem to upload it via FTP either, and it will definitely get the job done :)
So my advice would be to first try Adminer and if that one also fails go the MySQLDumper route.
How do I split a large MySQL backup file into multiple files?
You can use mysql_export_explode
https://github.com/barinascode/mysql-export-explode
<?php
# Include the class
include 'mysql_export_explode.php';
$export = new mysql_export_explode;
$export->db = 'dataBaseName'; # -- Set your database name
$export->connect('host','user','password'); # -- Connect to the database
$export->rows = array('Id','firstName','Telephone','Address'); # -- Set which fields you want to export
$export->exportTable('myTableName',15); # -- Table name and how many parts to split the table into
?>
When it finishes, SQL files are created in the directory where the script was executed, in the following format:
---------------------------------------
myTableName_0.sql
myTableName_1.sql
myTableName_2.sql
...

How do I get a tab-delimited MySQL dump from a remote host?

A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
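A minimal sketch building on that command (with placeholder credentials added): --batch output is tab-separated, which already matches the question; if you actually need commas, pipe it through sed (assumes GNU sed and no embedded tabs or commas in the data):
mysql -h remote_host -u username -p -e "SELECT * FROM my_schema.my_table" --batch --silent | sed 's/\t/,/g' > my_file.csv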
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My web server uses CentOS 6/cPanel, and the flags and sequence codeman used above did not work for me; I had to rearrange them and use different flags, etc.
Also, I used this for a local file dump; it's not just useful for remote DBs, because I had too many issues with SELinux and MySQL user permissions for SELECT INTO OUTFILE commands, etc.
What worked on my CentOS + cPanel server:
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one. I currently add it to the file after processing.
Selecting a Database
For some reason, I couldn't include the database name on the command line. I tried with
-D databasename
on the command line, but I kept getting permission errors, so I ended up using the following at the top of my query.sql:
USE database_name;
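Putting it together, a minimal sketch of that workaround with placeholder names:
# query.sql selects the database itself, so no -D option is needed
cat > query.sql <<'EOF'
USE database_name;
SELECT * FROM my_table;
EOF
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt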
On many systems, MySQL runs as a distinct user (such as user "mysql"), and your mysqldump will fail if that MySQL user does not have write permission in the dump directory; it doesn't matter what your own write permissions are in that directory. Changing the directory (at least temporarily) to world-writable (777) will often fix your export problem.
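A minimal sketch of that fix, with a hypothetical dump directory (note that secure_file_priv, if set on the server, may also restrict where mysqld is allowed to write):
mkdir -p /tmp/dumpdir
chmod 777 /tmp/dumpdir        # temporary; tighten the permissions afterwards
mysqldump -u username -p --fields-terminated-by=, -T /tmp/dumpdir db_name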