I'm attempting to dump all the databases from a 500 GB RDS instance into a smaller instance (100 GB). I have a lot of user permissions saved, so I also need to dump the mysql schema.
mysqldump -h hostname -u username -ppassword --all-databases > dump.sql
Now when I try to upload the data to my new instance I get the following error:
mysql -h hostname -u username -ppassword < dump.sql
ERROR 1044 (42000) at line 2245: Access denied for user 'staging'@'%' to database 'mysql'
I would just use a database snapshot to accomplish this, but my instance is smaller in size.
As a sanity check, I tried dumping the data into the original instance but got the same error. Can someone please advise on what I should do here? Thanks!
You may need to do the databases individually, or at least remove the mysql schema from the existing dump file (perhaps using grep to find the line numbers of the USE `database`; statements and then sed to trim out the troublesome section, or see the one-liner below), and then generate a dump file that doesn't monkey with the table structures or the proprietary RDS triggers in the mysql schema.
I have not tried to restore the full mysql schema onto an RDS instance, but I can certainly see where it would go awry with the customizations in RDS and the lack of SUPER privilege... but it seems like these options on mysqldump should get you close, at least.
mysqldump --no-create-info # don't try to drop and recreate the mysql schema tables
--skip-triggers # RDS has proprietary triggers in the mysql schema
--insert-ignore # write INSERT IGNORE statements to ignore duplicates
--databases mysql # only one database, "mysql"
--skip-lock-tables # don't lock the source tables with LOCK TABLES while dumping
--single-transaction # to avoid locking up the source instance during the dump
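Assembled into one command (the hostname, username, and output filename here are placeholders, so adjust for your environment), that would look something like this:
mysqldump -h hostname -u username -p --no-create-info --skip-triggers --insert-ignore --skip-lock-tables --single-transaction --databases mysql > mysql_schema_data.sql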
If this is still too aggressive, then you will need to resort to dumping only the rows from the specific tables whose content you need to preserve ("user" and the other grant tables).
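A sketch of that narrower dump, assuming the standard grant tables are the ones you need (adjust the table list to your setup):
mysqldump -h hostname -u username -p --no-create-info --skip-triggers --insert-ignore --skip-lock-tables --single-transaction mysql user db tables_priv columns_priv procs_priv > grant_tables.sql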
THERE IS NO WARRANTY on the following, but it's one from my collection. It's a one-liner that reads "old_dumpfile.sql" and writes "new_dumpfile.sql", switching the output off when it sees a USE or CREATE DATABASE statement with `mysql` on the same line, and switching it back on again the next time such a statement occurs without `mysql` in it. This will need to be modified if your dump file also has DROP DATABASE statements in it, or you could generate a new dump file with --skip-add-drop-database.
Running your existing dump file through this should essentially remove only the mysql schema from that file, allowing you to easily restore it manually, first, and then let the rest of the database data flow in more smoothly.
perl -pe 'if (/(^USE\s|^CREATE\sDATABASE.*\s)`mysql`/) { $x = 1; } elsif (/^USE\s`/ || /^CREATE\sDATABASE/) { $x = 0; }; $_ = "" if $x;' old_dumpfile.sql > new_dumpfile.sql
I guess you can try using Workbench. There is a migration function there: create the smaller instance (100 GB) first, then use that migration feature to migrate from the 500 GB instance to the 100 GB one and see if it works.
I have had too many access denied issues with RDS MySQL, so running the command below on RDS is my way out:
GRANT ALL ON `%`.* TO '<type_the_username_here>'@'%';
I am not sure whether this will be helpful in your case, but it has always been a life saver for me.
Related
I've been migrating databases (a few GB in size) in MySQL Workbench 6.1, from one MySQL server to another. Never having done this before, I thought it was 99% reliable. Instead, 2 out of 3 tries have failed.
My databases don't have complex features (triggers, stored procedures and functions, ...). The errors, though, are difficult to interpret, almost always about tables failing to export, reason unknown. There might occasionally be a duplicate key index in the source, but that shouldn't prevent an export from happening?
I've tried all the different methods available in the interface:
1) Server > Data Export > Data Import
2) Migration wizard
3) Schema transfer wizard
4) Reverse engineer
but no real difference.
Also, all these methods seem to be variants of the same thing. Do these menu options rely on the same procedure internally? How different are they really?
My questions are generic:
1) Is there a foolproof method that is more relaxed about errors, e.g. is
mysqldbcopy from the MySQL Utilities much better than the Workbench wizards?
2) Does the configuration of the MySQL wizards make any difference (e.g. a checkbox that causes errors by being too demanding if the source db has a problem)? I just want to transfer the db, not achieve perfection on the target server. I've switched SSL=NO, but it's still not working.
3) What is the single most important cause of errors in migration, e.g. server overload, insufficient memory, table structure?
Thanks in advance,
There might be occasionally a duplicated key index in source, but that shouldn't prevent an export from happening?
Yeah, it shouldn't prevent the export operation.
I've tried all the different methods available in the interface:
All of the interfaces you have used might have a timeout configured, so the operation doesn't fully execute because your database is big.
So how do you migrate a MySQL database from one server to another?
To do it properly, I suggest you use the command line like this:
Step 1: Create a backup file on the old server
mysqldump -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
Step 2: Transfer the backup file to the new server.
Step 3: Import the backup file on the new server.
mysql -u [[user_name]] -p[[password]] [[db_name]] < db_backup.sql
Pro tip:
You can combine steps 1 & 2 if you have remote MySQL access enabled on the old server. Just execute this command on the new server and it will download the backup file into the current directory of the new server.
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
where [[xxx.xx.xxx.xxx]] represents ip address/hostname for old server.
Extra Note:
Please note that there is no space between -p and [[password]]. You can also omit [[password]] entirely if you think it's a security issue to include the password in the command.
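If both servers can reach each other over the network, you can also skip the intermediate file entirely and pipe the dump from the old server straight into the new one. A sketch (run on the new server; the bracketed names are placeholders as above):
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] | mysql -u [[user_name]] -p[[password]] [[db_name]]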
If you have access to a terminal, you can try using mysqldump, and you could also try the Percona XtraBackup tool.
MySQL dump (if your DB is too large, I suggest running these commands inside a screen session; see the sketch after the note below):
Backup all DB : mysqldump -u root -pxxxx --all-databases > all_db_backup.sql
Backup Tables : mysqldump -u root -pxxxx DatabaseName table1 table2 > tables.sql
Backup Individual databases : mysqldump -u root -pxxx --databases DB1 DB2 > Only_DB.sql
To import: sync all the files to another server and try importing as shown below
mysql -u root -pxxxx < all_db_backup.sql (Use Screen for large Databases)
Individual DB : mysql -u root -pxxx DBName < DB.sql
(Note: before you import, make sure your backup file already contains CREATE DATABASE IF NOT EXISTS statements; if not, create those databases before importing.)
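Since a dump of a large database can outlive your SSH session, here is a minimal sketch of running it inside screen (the session name is arbitrary):
screen -S dbdump                                            # start a named screen session
mysqldump -u root -pxxxx --all-databases > all_db_backup.sql
# detach with Ctrl-A then D; reattach later with:
screen -r dbdump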
This may seem like a very dumb question, but I never learned this any other way and I just want some clarification.
I started to use MySQL a while ago and in order to test various scenarios, I back up my databases. I used MySQL dump for that:
Export:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases > filename.sql
Import:
mysql -hSERVER -uUSER -pPASSWORD < filename.sql
Easy enough, and it worked quite well up until now, when I noticed a little problem with this "setup": it does not fully "reset" the databases and tables. If, for example, a table is added AFTER a dump file has been created, that additional table will not disappear when you import the same dump file. It essentially only "corrects" tables that are already there and recreates any databases or tables that are missing, but it does not remove any additional tables whose names are not in the dump file.
What I want to do is to completely reset all the databases on a server when I import such a dump file. What would be the best solution? Is there a special import function reserved for that purpose or do I have to delete the databases myself first? Or is that a bad idea?
You can use the parameter --add-drop-database to add a "drop database" statement to the dump before each "create database" statement.
e.g.
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --add-drop-database >filename.sql
See the mysqldump documentation for details.
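With that option, each database section of the dump begins with statements along these lines (the database name is just illustrative):
DROP DATABASE IF EXISTS `mydb`;
CREATE DATABASE IF NOT EXISTS `mydb`;
USE `mydb`;
so importing the file drops and recreates each database it contains, which removes any tables added after the dump was taken.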
There's nothing magic about the dump and restore processes you describe. mysqldump writes out SQL statements that describe the current state of the database or databases you are dumping. It has to fetch a list of tables in each database you're dumping, then it has to read the tables one by one and write them out as SQL. On databases of any size, this takes time.
So, if you create a new table while mysqldump is running, it may not pick up that new table. Similarly, if your application software changes contents of tables while mysqldump is running, those changes may or may not show up in the backup.
You can look at the .sql files mysqldump writes out to see what they have picked up. If you want to be sure that your dumped .sql files are perfect, you need to run mysqldump on a quiet server -- one where nobody is running data definition language.
MySQL hot backup solutions are available. You may need to look into that.
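If your tables are InnoDB, one common way to get a consistent point-in-time snapshot without quieting the whole server is mysqldump's --single-transaction option, e.g. (same placeholders as above):
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --single-transaction > filename.sql
Note that this only guarantees consistency for transactional tables, and DDL run while the dump is in progress can still interfere.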
The OP may want to look into mysql_install_db if they want a fresh start with the post-install default settings before restoring one or more dumped DBs. For production servers, another useful script is mysql_secure_installation.
Also, they may prefer to dump the DB(s) they created separately:
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo > foo.sql
to avoid inadvertently changing the internal DBs:
mysql, information_schema, performance_schema.
A mysqldump command like the following:
mysqldump -u<username> -p<password> -h<remote_db_host> -T<target_directory> <db_name> --fields-terminated-by=,
will write out two files for each table (one is the schema, the other is CSV table data). To get CSV output you must specify a target directory (with -T). When -T is passed to mysqldump, it writes the data to the filesystem of the server where mysqld is running - NOT the system where the command is issued.
Is there an easy way to dump CSV files from a remote system ?
Note: I am familiar with using a simple mysqldump and handling the STDOUT output, but I don't know of a way to get CSV table data that way without doing some substantial parsing. In this case I will use the -X option and dump xml.
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent > my_file.csv
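Note that --batch output is tab-separated rather than comma-separated; if you are sure the data contains no embedded tabs or commas, a naive conversion is simply:
mysql -h remote_host -e "SELECT * FROM my_schema.my_table" --batch --silent | tr '\t' ',' > my_file.csv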
I want to add to codeman's answer. It worked but needed about 30 minutes of tweaking for my needs.
My web server runs CentOS 6/cPanel, and the flags and sequence codeman used above did not work for me; I had to rearrange them and use different flags, etc.
Also, I used this for a local file dump (it's not just useful for remote DBs), because I had too many issues with SELinux and MySQL user permissions for SELECT ... INTO OUTFILE commands, etc.
What worked on my CentOS + cPanel server
mysql -B -s -uUSERNAME -pPASSWORD < query.sql > /path/to/myfile.txt
Caveats
No Column Names
I can't get column names to appear at the top. I tried adding the flag:
--column-names
but it made no difference. I am still stuck on this one; for now I add the column names to the file after processing.
Selecting a Database
For some reason, I couldn't include the database name in the commandline. I tried with
-D databasename
in the command line, but I kept getting permission errors, so I ended up putting the following at the top of my query.sql:
USE database_name;
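So a minimal query.sql, with a hypothetical table and columns, ends up looking like:
USE database_name;
SELECT id, name, created_at FROM my_table;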
On many systems, MySQL runs as a distinct user (such as user "mysql") and your mysqldump will fail if the MySQL user does not have write permissions in the dump directory - it doesn't matter what your own write permissions are in that directory. Changing your directory (at least temporarily) to world-writable (777) will often fix your export problem.
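A quick (and crude) way to test whether this is your problem, assuming /tmp/dump is the target directory and the usual placeholders:
mkdir -p /tmp/dump
chmod 777 /tmp/dump    # world-writable so the mysqld user can write here
mysqldump -u<username> -p<password> -T/tmp/dump <db_name> --fields-terminated-by=,
Tighten the permissions again once you have confirmed the export works.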
I've exported a mysqldump of a database with InnoDB tables and foreign key relationships in them, using the --single-transaction flag (that I read somewhere I should use for InnoDB). No problems.
But when trying to import that dump into another existing database (same database, different server) I get all sorts of errors when trying to drop the tables because it would break the InnoDB relationships.
I also read that I should use foreign_key_checks=0 to avoid this, but that is a server variable, not part of the dump process. So I'm trying to figure out how to automate all of this, since I have a script that backs up the DB; it was working when all we had were MyISAM tables:
mysqldump -u user -p'password' --single-transaction -q database | ssh user@backup.com mysql -u user -p'password' database
Thanks.
You can dump into a file, add the required SET FOREIGN_KEY_CHECKS=0; in that file, and then feed the file to mysql.
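For example, something like this (an untested sketch; user, password, database, and dump.sql are placeholders) wraps the restore without editing the dump by hand:
(echo "SET FOREIGN_KEY_CHECKS=0;"; cat dump.sql; echo "SET FOREIGN_KEY_CHECKS=1;") | mysql -u user -p'password' database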
It turns out that the mysqldump file is smart enough to detect that they are InnoDB tables and puts the appropriate comments at the top of the file. My problem was that when I exported through phpMyAdmin, it didn't put the correct comments in the file, hence causing all this trouble.
Thanks for your response.
You can also set this on the mysql command line when restoring, without editing the original file. This is very useful since MySQL backups can become huge, and editing a GB+ file takes a lot of time compared with adding an option to the command line:
mysql -D YourDatabaseName -u YourUserName -p --init-command="SET @@foreign_key_checks=0" < YourBackupDumpFile.sql
I have a mysql dump created with mysqldump that holds all the tables in my database and all their data. However, I only want to restore two tables. (Let's call them kittens and kittens_votes.)
How would I restore those two tables without restoring the entire database?
Well, you have three main options.
You can manually find the SQL statements in the file relating to the backed up tables and copy them manually. This has the advantage of being simple, but for large backups it's impractical.
Restore the database to a temporary database. Basically, create a new db, restore the dump into that db, and then copy the data from there to the old one (see the sketch after these options). This will work well only if you're doing single-database backups (i.e. if there are no CREATE DATABASE commands in the backup file).
Restore the database to a new database server, and copy from there. This works well if you take full server backups as opposed to single database backups.
Which one you choose will depend upon the exact situation (including how much data you have)...
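For the second option, once the dump has been restored into the temporary database, the copy step can be as simple as the following (database and table names are taken from the question, and the target tables are assumed to be empty or cleared first):
INSERT INTO original_db.kittens SELECT * FROM temp_db.kittens;
INSERT INTO original_db.kittens_votes SELECT * FROM temp_db.kittens_votes;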
You can parse out the CREATE TABLE kittens|kittens_votes and INSERT INTO ... statements using a regexp, for example, and only execute those statements. As far as I know, there's no other way to "partially restore" from a dump.
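For example, mysqldump normally marks each table's section with a "-- Table structure for table ..." comment, so something along these lines (assuming your dump contains those markers) pulls one table out; repeat for kittens_votes:
sed -n '/^-- Table structure for table `kittens`/,/^-- Table structure for table/p' dump.sql > kittens.sql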
Open the .sql file and copy the insert statements for the tables you want.
Create a new user with access to only those 2 tables. Now restore the DB with the -f (force) option, which will ignore failed statements and execute only the statements the user has permission to run.
What you want is a "Single Table Restore"
http://hashmysql.org/wiki/Single_table_restore
A few options are outlined above ... However the one which worked for me was:
Create a new DB
$ mysql -u root -p -e "CREATE DATABASE temp_db"
Import the .sql file (the one with the desired table) into the new DB
$ mysql -u root -p temp_db < ~/full/path/to/your_database_file.sql
Dump the desired table
$ mysqldump -u root -p temp_db awesome_single_table > ~/awesome_single_table.sql
Import the desired table
$ mysql -u root -p original_database < ~/awesome_single_table.sql
Then delete the temp_db and you're all golden!
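That last cleanup step is just (assuming the same temp_db name as above):
$ mysql -u root -p -e "DROP DATABASE temp_db"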