I'm trying to dump my Cloud SQL instance database from my local computer.
I know I should use gcloud commands, but in the project where I'll use this it would be a real pain to rewrite all the mysqldump instructions.
I can connect to Cloud SQL via the MySQL client, but when I try to use mysqldump I get the following:
mysqldump --databases testdb -h 130.211.xxx.xxx -u root -p > testdump.sql
mysqldump: Got error: 1227: Access denied; you need (at least one of) the SUPER privilege(s) for this operation when using LOCK TABLES
And of course Cloud SQL doesn't support SUPER privileges... :/
Does anyone know if there's a way around this?
Yes, it works, but you must first use the cloud_sql_proxy and have the right permissions. This is not in the documentation at the moment, neither as a warning nor as an official method. Still, using an intermediary bucket for dumps is not to my liking.
On macOS with the latest mysqldump as of this writing (posted as an example of the problems I encountered; yours may vary with OS and mysqldump version):
mysqldump --column_statistics=0 -h 127.0.0.1 -u <user> -p <db> --set-gtid-purged=OFF > <dumpFile>
# this is because I use the TCP connection sample for the Cloud SQL Proxy
mysql -h 127.0.0.1 -u <user> -p -D <database> < DBs/mysqldump100519.sql
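For context, a minimal sketch of how the proxy side of this might look (the instance connection name my-project:us-central1:my-instance is a placeholder for your own):
# Start the v1 Cloud SQL Proxy listening on a local TCP port;
# replace the instance connection name with your own:
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306 &
# Then point mysqldump at the local proxy instead of the public IP:
mysqldump --column_statistics=0 --set-gtid-purged=OFF -h 127.0.0.1 -u <user> -p <db> > <dumpFile>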
According to their documentation, it seems you have two options.
The first, which you do not like, is to use a gcloud command.
The second is to use the RESTful API to access the service, which is, under the hood, what the gcloud commands use. You may issue the same request from inside your code. Take a look here.
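For illustration, a hedged sketch of what that REST call could look like with curl (the project, instance, and bucket names are placeholders, and gcloud is assumed to be installed for the access token; note the Admin API's export operation writes to a Cloud Storage bucket, not to a local file):
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"exportContext": {"fileType": "SQL", "uri": "gs://my-bucket/testdump.sql", "databases": ["testdb"]}}' \
  "https://sqladmin.googleapis.com/v1/projects/my-project/instances/my-instance/export"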
Related
Hi All,
I am trying to restore a nearly 8 GB DB to a remote server using the mysql command at the command prompt. It has been 8 hours since I started the process, and it is still restoring. I tried the command:
> mysql -h hostname -u username -p dbname < location of the dump file
My questions are,
Does it really take this many hours to restore this much data?
Is it possible to restore an 8 GB database?
Am I doing it the correct way?
Is there any better way to restore the DB?
In my opinion the answer from @Ferri is good; in cases like this the CLI is always the best option.
The only improvement I suggest is to use gzip to reduce the size of the dump file.
Dump the db like so:
mysqldump --host yourhost -u root --port 3306 -p yourdb | gzip -9 > yourdb.sql.gz
Restore the db like so:
gzip -cd yourdb.sql.gz | mysql -h yourhost -u root -p yourdb
Command
mysql -h IP -u Username -p schema < file
Example
mysql -h 192.168.10.122 -u root -p mydatabase < /tmp/20160628_test_minificated.sql
Does it really take this many hours to restore this much data?
It depends on the size of the dump file and the connection speed.
Is it possible to restore an 8 GB database?
Yes, you can restore big databases this way.
Am I doing it the correct way?
For me this is the best way when you are working from a command-line interface and the destination is also a command-line interface.
Is there any better way to restore the DB?
Yes, you have multiple options, like phpMyAdmin, Workbench, HeidiSQL and many others, but each one has its own limitations.
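On the question of how long it takes: a plain restore prints nothing while it runs. A hedged option, assuming the pv utility is installed, is to pipe the dump through pv so you get a progress bar and an ETA:
# Show restore progress with pv (a separate utility you may need to install):
pv /tmp/20160628_test_minificated.sql | mysql -h 192.168.10.122 -u root -p mydatabase
# Works for compressed dumps too:
pv yourdb.sql.gz | gzip -cd | mysql -h yourhost -u root -p yourdb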
I am doing some prototyping and so created a database with a few tables and dependencies. The project became bigger than I thought, and now I want to clean up the names, dependencies, etc., and so want to create the DB anew. But I don't want to go through the whole process of creating individual tables again; instead I want to start with what I have, clean the creation scripts up, and run them if possible. Is there a way I can export all the scripts needed to create the DB and tables? Are there tools or mysql command-line options to do this?
Thanks,
-S
This can get you started:
mysqldump -u user -ppassword -h host --no-create-db --no-data [other options] old_database > dump.sql
then you can edit the dump file for any necessary changes and import back into the new database:
mysql -u user -ppassword -h host new_database < dump.sql
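As an example of the "edit the dump file" step above, a small sketch using sed to rename a table throughout the dump (old_messy_name and clean_name are hypothetical table names):
# Rename a table everywhere in the dump before re-importing
# (GNU sed; on macOS use sed -i '' instead of sed -i):
sed -i 's/`old_messy_name`/`clean_name`/g' dump.sql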
More information about mysqldump is in the MySQL Reference Manual.
I recommend you look at MySQL Workbench.
It can do everything you need.
Here's the list of all its features; in particular:
Reverse Engineer from Live Database
Reverse Engineer from SQL Script
Also, good to mention that it's free (community version)
I want to execute a text file containing SQL queries, in MySQL.
I tried to run source /Desktop/test.sql and received the error:
mysql> . \home\sivakumar\Desktop\test.sql ERROR: Failed to open file
'\home\sivakumar\Desktop\test.sql', error: 2
Any idea on what I am doing wrong?
If you're at the MySQL command line (mysql>) you have to declare the SQL file with source, using forward slashes in the path:
mysql> source /home/user/Desktop/test.sql;
You have quite a lot of options:
Use the MySQL command line client: mysql -h hostname -u user database < path/to/test.sql
Install the MySQL GUI tools, open your SQL file, then execute it
Use phpMyAdmin if the database is available via your webserver
You can execute MySQL statements that have been written in a text file using the following command (note there is no space between -p and the password):
mysql -u yourusername -pyourpassword yourdatabase < text_file
If your database has not been created yet, log into mysql first using:
mysql -u yourusername -pyourpassword
then:
mysql> CREATE DATABASE a_new_database_name;
then:
mysql -u yourusername -pyourpassword a_new_database_name < text_file
That should do it!
More info here: http://dev.mysql.com/doc/refman/5.0/en/mysql-batch-commands.html
My favorite option to do that will be:
mysql --user="username" --database="databasename" --password="yourpassword" < "filepath"
I use it this way because quoting the values with "" avoids wrong paths and mistakes with spaces, dashes, and probably other problem characters that I did not encounter.
Following @elcuco's comment, I suggest prefixing this command with a [space] so bash skips saving it in history; this works out of the box in most bash setups.
In case it still saves your command in history, see the following solution:
Execute command without keeping it in history
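For reference, the leading-space trick depends on bash's HISTCONTROL variable, which you can check or set yourself; a quick sketch:
# The leading-space trick only works when HISTCONTROL includes ignorespace:
export HISTCONTROL=ignorespace
# Note the leading space below; the command is then kept out of history:
 mysql --user="username" --database="databasename" --password="yourpassword" < "filepath"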
Extra security edit
Just in case you want to be extra safe, you can use the following command and enter the password at the prompt:
mysql --user="username" --database="databasename" -p < "filepath"
All the top answers are good. But just in case someone wants to run a query from a text file on a remote server AND save the results to a file (instead of showing them on the console), you can do this:
mysql -u yourusername -pyourpassword yourdatabase < query_file > results_file
Hope this helps someone.
I came here searching for this answer as well, and here is what works best for me (note: I am using Ubuntu 16.x.x):
Access mysql using:
mysql -u <your_user> -p
At the mysql prompt, enter:
source file_name.sql
Hope this helps.
Give the path of the .sql file as:
source c:/dump/SQL/file_name.sql;
mysql> source C:\Users\admin\Desktop\fn_Split.sql
Do not use single quotes.
If the above command does not work, copy the file to the C: drive and try again,
as shown below:
mysql> source C:\fn_Split.sql
Instead of redirection, I would do the following:
mysql -h <hostname> -u <username> --password=<password> -D <database> -e 'source <path-to-sql-file>'
This will execute the file at <path-to-sql-file>.
It is never good practice to pass the password directly on the command line: it gets saved in the ~/.bash_history file and can be accessed by other applications.
Use this instead:
mysql -u user --host host --port 9999 database_name < /scripts/script.sql -p
Enter password:
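Another standard way to keep the password off the command line entirely is a ~/.my.cnf option file, which the mysql client reads automatically; a minimal sketch:
# Contents of ~/.my.cnf (read automatically by the mysql client):
[client]
user=your_username
password=your_password
# Lock the file down so only you can read it:
chmod 600 ~/.my.cnf
# Then no -u/-p is needed at all:
mysql database_name < /scripts/script.sql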
mysql -uusername -ppassword database-name < file.sql
So many ways to do it.
From Workbench: File > Run SQL Script -- then follow prompts
From Windows Command Line:
Option 1: mysql -u usr -p
mysql> source file_path.sql
Option 2: mysql -u usr -p -e 'source file_path.sql'
Option 3: mysql -u usr -p < file_path.sql
Option 4: put multiple 'source' statements inside of file_path.sql (I do this to drop and recreate schemas/databases, which requires multiple files to be run)
mysql -u usr -p < file_path.sql
If you get errors from the command line, make sure you have previously run
cd <mysqld.exe home directory>
mysqld.exe --initialize
This must be run from within the mysqld.exe directory, hence the cd.
Hope this is helpful and not just redundant.
From Linux 14.04 to MySQL 5.7, using the cat command piped into a mysql login:
cat /Desktop/test.sql | sudo mysql -uroot -p
You can use this method to execute many MySQL commands directly from the shell. E.g.:
echo "USE my_db; SHOW tables;" | sudo mysql -uroot -p
Make sure you separate your commands with semicolons (';').
I didn't see this approach in the answers above and thought it was a good contribution.
Very likely, you just need to change the backslashes to forward slashes:
from
\home\sivakumar\Desktop\test.sql
to
/home/sivakumar/Desktop/test.sql
So the command would be:
source /home/sivakumar/Desktop/test.sql
Use the following from the mysql command prompt:
source /home/user/Desktop/test.sql;
Use no quotation marks. Even if the path contains a space (' '), use no quotation at all.
Since mysql -u yourusername -pyourpassword yourdatabase < text_file did not work on a remote server (Amazon's EC2)...
Make sure that the Database is created first.
Then:
mysql --host=localhost --user=your_username --password=your_password your_database_name < pathTofilename.sql
For future reference, I've found this to work, versus the aforementioned methods, under Windows in your mysql console:
mysql> source c://path_to_file//path_to_file//file_name.sql;
If your root drive isn't called "c", just substitute your drive letter. First try backslashes; if they don't work, try forward slashes. If those also don't work, make sure you have the full file path and the .sql extension on the file name, and if your version insists on semicolons, make sure one is there and try again.
If you are here LOOKING FOR A DRUPAL ENVIRONMENT,
you can run it with the drush command in your project directory:
drush sqlc
If you are trying this command:
mysql -u root -proot -D database < /path/to/script.sql
you may get an error like the following if the path contains special characters, mainly backticks ('`'):
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '/path/to/script.sql' at line 1
So I would suggest using a command like this:
echo "source /path/to/script.sql" | mysql -u root -proot -D database
This command will run source /path/to/script.sql once connected to the server, which executes your script.
I had this error and tried all the advice I could find, to no avail.
Finally, the problem was that my folder had a space in its name, which showed up in the folder path; once I found and removed it, everything worked fine.
I use Bash's Here Strings for instant SQL execution:
mysql -uroot -p <<<"select date(now())"
https://www.gnu.org/software/bash/manual/html_node/Redirections.html#Here-Strings
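For multi-statement snippets, a Bash here document works the same way; a small sketch (my_db is a placeholder database name):
# Here document: several statements without a separate file.
mysql -uroot -p <<'SQL'
USE my_db;
SHOW TABLES;
SELECT DATE(NOW());
SQL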
How can I import existing MySQL database into Amazon RDS?
I found this page on the AWS docs which explains how to use mysqldump and pipe it into an RDS instance.
Here's their example code (use in command line/shell/ssh):
mysqldump acme | mysql --host=hostname --user=username --password acme
where acme is the database you're migrating over, and hostname/username are those of your RDS instance.
You can connect to RDS as if it were a regular MySQL server; just make sure to add your EC2 IPs to your security groups, per this forum posting.
I had to include the password for the local mysqldump, so my command ended up looking more like this:
mysqldump --password=local_mysql_pass acme | mysql --host=hostname --user=username --password acme
FWIW, I just completed moving my databases over. I used this reference for mysql commands like creating users and granting permissions.
Hope this helps!
There are two ways to import data:
mysqldump: If your data size is less than 1 GB, you can directly use the mysqldump command and import your data to RDS.
mysqlimport: If your data size is more than 1 GB, or the data is in some other format, you can compress the data into flat files and upload it using the mysqlimport command.
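As a sketch of the mysqlimport route (the host, database, and file names are placeholders; mysqlimport derives the target table from the file name, so data.csv loads into an existing table named data):
# Load a local CSV flat file into an RDS table with mysqlimport;
# the table "data" must already exist in database_name:
mysqlimport --local --host=myinstance.xxxx.us-east-1.rds.amazonaws.com \
  --user=username -p \
  --fields-terminated-by=',' --lines-terminated-by='\n' \
  database_name data.csv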
I'm a big fan of the SQLyog tool. It lets you connect to your source and target databases and sync schema and/or data. I've also used SQLWave, but switched to SQLyog; it's been so long since I made the switch that I can't remember exactly why. Anyway, that's my two cents. I know some will object to my suggesting Windows GUI tools for MySQL, but I like the SQLyog product so much that I run it from Wine (it works flawlessly under Wine on Ubuntu for me).
This blog might be helpful.
A quick summary of a GoSquared Engineering post:
Configuration + Booting
Select a maintenance window and backup window when the instance will be at lowest load
Choose Multi-AZ or not (highly recommended for auto-failover and maintenance)
Boot your RDS instance
Configure security groups so your apps etc can access the new instance
Data migration + preparation
Enable binlogging if you haven't already
Run mysqldump --single-transaction --master-data=2 -C -q dbname -u username -p > backup.sql on the old instance to take a dump of the current data
Run mysql -u username -p -h RDS_endpoint DB_name < backup.sql to import the data into your RDS instance (this may take a while depending on your DB size)
In the meantime, your current production instance is still serving queries - this is where master-data=2 and binlogging come in
In your backup.sql file, you'll have a line at the top that looks like CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=350789121;
Get the diff since backup.sql as an SQL file: mysqlbinlog /var/log/mysql/mysql-bin.000003 --start-position=350789121 --base64-output=NEVER > output.sql
Run those queries on your RDS instance to update it: cat output.sql | mysql -h RDS_endpoint -u username -p DB_name
Get the new log position by finding end_log_pos at the end of the latest output.sql file.
Get the diff since the last output.sql (like step 6) and repeat steps 7 + 8.
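A hedged condensation of steps 6-8 above: you can pipe the binlog diff straight into RDS while still keeping output.sql around (via tee) so end_log_pos stays available for the next round. The binlog file name and start position are the ones from your own CHANGE MASTER line:
# Replay everything after the recorded position directly into RDS,
# keeping a copy in output.sql so you can read end_log_pos afterwards:
mysqlbinlog /var/log/mysql/mysql-bin.000003 --start-position=350789121 \
  --base64-output=NEVER | tee output.sql | mysql -h RDS_endpoint -u username -p DB_name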
The actual migration
Have all your apps ready to deploy quickly with the new RDS instance
Get the latest end_log_pos from output.sql
Run FLUSH TABLES WITH READ LOCK; on the old instance to stop all writes
Start deploying your apps with the new RDS instance
Run steps 6-8 from above to update the RDS instance with the last queries to the old server
Conclusion
Using this method, you'll have a small amount of time (depending on how long it takes to deploy your apps + how many writes your MySQL instance serves - probably only a minute or two) with writes being rejected from your old server, but you will have a consistent migration with no read downtime.
A full and detailed post explaining how we (GoSquared) migrated to RDS with minimal downtime (including error debugging) is available here: https://engineering.gosquared.com/migrating-mysql-to-amazon-rds.
I completely agree with @SanketDangi.
There are two ways of doing this: one, as suggested, is using either mysqldump or mysqlimport.
I have seen cases where this creates problems while restoring and the data on the cloud gets corrupted.
However, importing applications to the cloud has become much easier nowadays. You can try uploading your DB server to a public cloud through Ravello.
You can import your database server itself onto Amazon using Ravello.
Disclosure: I work for Ravello.
Simplest example:
# export local db to sql file:
mysqldump -uroot -p --databases qwe_db > qwe_db.sql
# Now you can edit the qwe_db.sql file and change the db name at the top if you want
# import sql file to AWS RDS:
mysql --host=proddb.cfrnxxxxxxx.eu-central-1.rds.amazonaws.com --port=3306 --user=someuser -p qwe_db < qwe_db.sql
The AWS RDS customer data import guide for MySQL is available here: http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded
Stop any applications accessing the target DB Instance
Create a DB Snapshot
Disable Amazon RDS automated backups
Load the data using mysqlimport
Enable automated backups again
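If you script this, the snapshot and backup steps in that list have AWS CLI equivalents; a sketch, assuming a hypothetical instance identifier mydbinstance and a 7-day retention to restore afterwards:
# Step 3: take a manual snapshot before loading:
aws rds create-db-snapshot --db-instance-identifier mydbinstance --db-snapshot-identifier mydbinstance-pre-import
# Step 4: disable automated backups by setting retention to 0:
aws rds modify-db-instance --db-instance-identifier mydbinstance --backup-retention-period 0 --apply-immediately
# Step 6: re-enable them once the load finishes:
aws rds modify-db-instance --db-instance-identifier mydbinstance --backup-retention-period 7 --apply-immediately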
If you are using the terminal, this is what worked for me:
mysqldump -u local_username -plocal_password local_db_name | mysql -h myRDS-at-amazon.rds.amazonaws.com -u rds-username -prds_password_xxxxx remote_db_name
and then I used MySQL Workbench (free download) to check it was working, because the command line was static after pressing Enter; I could probably have put -v at the end to see its output.
Note: there is no space after -p
Here are the steps I followed, with success.
Take a mysqldump of the needed database:
mysqldump -u username -p databasename --single-transaction --quick --lock-tables=false >databasename-backup-$(date +%F).sql
(Don't forget to replace the username, root most of the time, and databasename with the name of the database you are going to migrate to RDS.)
Once prompted, enter your password.
Once done, log in to the RDS instance from your MySQL server (make sure the security groups are configured to allow the connection from EC2 to RDS):
mysql -h hostaddress -P 3306 -u rdsusername -p
(Don't forget to replace hostaddress with the address of your RDS instance and rdsusername with the username for your RDS instance; when prompted, give the password too.)
You can find that host address under Connectivity & security -> Endpoint & port on the RDS database page in the AWS Console.
Once logged in, create the database using MySQL commands:
create database databasename;
\q
Once the database is created in RDS, import the SQL file created in step 1:
mysql -h hostaddress -u rdsusername -p databasename < backupfile.sql
This should import the SQL file to RDS and restore the contents into the new database.
Reference from : https://k9webops.com/blog/migrate-an-existing-database-on-mysql-mariadb-to-an-already-running-rds-instance-on-the-aws/
Which scripts/solutions do you use for importing and exporting large MySQL databases?
phpMyAdmin gives an error for these operations if there is a large amount of data.
http://sypex.net/en/ is better than phpMyAdmin in that respect.
If you have access to the command line in both locations, mysqldump is the way to go.
For more detailed answers, you'll need to add much more information about your setup, e.g. whether you are on some sort of hosting package or a server of your own.
Using phpMyAdmin is pointless for large databases. So far I have been working with databases just over 1 GB in size, with over 12 million records. In my experience, the best way to export data is to use
mysqldump -h HOST -u USER -p database_name > export_file.sql
-h is optional in most cases. If you are on a remote server and the error "mysqldump: Got error: 1044: Access denied for user..." pops up, add --single-transaction:
mysqldump --single-transaction -h HOST -u USER -p database_name > export_file.sql
You can look up the reason here. To import the database, you can use:
mysql -h HOST -u USER -p database_name < export_file.sql
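And if both servers are reachable from one shell, a hedged shortcut is to skip the intermediate file entirely and stream the dump straight into the target (OLD_HOST/NEW_HOST are placeholders; note that an inline password, as on the dump side here, lands in your shell history, so prefer an option file or the leading-space trick discussed earlier):
# Stream the dump directly from one server to the other; no export_file.sql needed.
mysqldump --single-transaction -h OLD_HOST -u USER -pOLD_PASSWORD database_name | mysql -h NEW_HOST -u USER -p database_name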