I need to delete the data from all tables but one in my database. Let's assume the database is called my_database and the table whose data should be preserved is called my_important_table. Is there any way to achieve this?
I was able to figure out this problem thanks to these questions:
Truncate all tables in a MySQL database in one command? (see the most voted answer)
mysql: What is the right syntax for NOT LIKE? (see the accepted answer)
The following command worked properly for me:
mysql -u root -p -Nse "SHOW TABLES WHERE \`Tables_in_my_database\` != 'my_important_table'" my_database | while read table; do echo "SET FOREIGN_KEY_CHECKS = 0; truncate table $table;"; done | mysql -u root -p my_database
The following command is the same as the previous one, but split into multiple lines for readability.
mysql -u root -p -Nse "SHOW TABLES WHERE \`Tables_in_my_database\` != 'my_important_table'" my_database | \
while read table; do echo "SET FOREIGN_KEY_CHECKS = 0; truncate table $table;"; done | \
mysql -u root -p my_database
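If more than one table needs to be preserved, the same approach should also work with NOT IN instead of != (a sketch; the second table name here is hypothetical):
mysql -u root -p -Nse "SHOW TABLES WHERE \`Tables_in_my_database\` NOT IN ('my_important_table', 'another_table_to_keep')" my_database | \
while read table; do echo "SET FOREIGN_KEY_CHECKS = 0; truncate table $table;"; done | \
mysql -u root -p my_database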
I have database A. I issue this command against it:
mysqldump --host=localhost -uroot -p"mypassword" my_db_name > file.sql
Now I take this file to machine B, which also runs MySQL, and create a database:
create database newdb;
I then:
mysql --host=localhost -uroot -proot newdb < file.sql
My problem is that not all tables that exist in file.sql are created in the new database! I clearly see CREATE TABLE users in the contents of file.sql, followed by thousands of INSERT statements for the content of that table.
But the users table is never created in the new database. I am completely lost as to why.
If you have foreign keys, the tables might be created in the wrong order, and since the constraints can't be created, creating the table fails. Try adding SET FOREIGN_KEY_CHECKS=0 at the beginning of the dump and SET FOREIGN_KEY_CHECKS=1 at the end.
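A minimal sketch of what the edited dump would look like:
SET FOREIGN_KEY_CHECKS=0;
-- ... original contents of file.sql ...
SET FOREIGN_KEY_CHECKS=1;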
Delete the whole newdb database;
Restart mysqld;
Run mysqlcheck --repair --all-databases -u root -proot on machine B;
Create newdb again (or maybe call it newdb2 just to be sure);
Delete file.sql on machine B, copy file.sql again from machine A and import it with mysql --host=localhost -uroot -proot newdb < file.sql;
Run SHOW ENGINE INNODB STATUS; and/or SHOW TABLE STATUS; and analyze the results.
Copy a CREATE TABLE that failed to work and paste it into the command-line tool mysql. What messages, if any, do you get? Does it create the table?
Please provide that CREATE for us; there may be some odd clues.
Also provide the output of SHOW VARIABLES LIKE '%enforce%';
I am trying to import a SQL file into RDS (1 GB RAM, 1 CPU). The SQL file is about 1.4 GB.
mysql -h xxxx.rds.amazonaws.com -u user -ppass --max-allowed-packet=33554432 db < db.sql
It fails at:
ERROR 1227 (42000) at line 374: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
The actual SQL content is:
/*!50003 CREATE*/ /*!50017 DEFINER=`another_user`@`1.2.3.4`*/ /*!50003 TRIGGER `change_log_BINS` BEFORE INSERT ON `change_log` FOR EACH ROW
IF (NEW.created_at IS NULL OR NEW.created_at = '00-00-00 00:00:00' OR NEW.created_at = '') THEN
SET NEW.created_at = NOW();
END IF */;;
another_user does not exist in RDS, so I do:
GRANT ALL PRIVILEGES ON db.* TO another_user@'localhost';
Still no luck.
Either remove the DEFINER=... clause from your mysqldump file, or replace the user values with CURRENT_USER.
The MySQL server provided by RDS does not allow a DEFINER syntax for another user (in my experience).
You can use a sed script to remove them from the file:
sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i oldfile.sql
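If you prefer the CURRENT_USER route mentioned above, a similar substitution should work (a sketch with GNU sed, not tested against every dump format):
sed -i 's/DEFINER=`[^`]*`@`[^`]*`/DEFINER=CURRENT_USER/g' oldfile.sql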
Remove the 3 lines below if they're there, or comment them out with -- :
At the start:
-- SET @@SESSION.SQL_LOG_BIN= 0;
-- SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
At the end:
-- SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
Note that the comment characters are "dash dash space" including the space.
A better solution is to stop these lines from being written to the dump file at all by including the option --set-gtid-purged=OFF on your mysqldump command.
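For example, reusing the dump command from earlier in this thread (a sketch):
mysqldump --host=localhost -uroot -p"mypassword" --set-gtid-purged=OFF my_db_name > file.sql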
Another useful trick is to invoke mysqldump with the option --set-gtid-purged=OFF, which prevents the following lines from being written to the output file:
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
I am not sure about the DEFINER one.
When we create a new RDS DB instance, the default master user is not the root user; it only gets certain privileges for that DB instance, and those do not include the privileges needed for certain SET statements. If your default master user tries to execute such SET statements, you will face this error: Access denied; you need (at least one of) the SUPER or SYSTEM_VARIABLES_ADMIN privilege(s) for this operation
Solution 1
Comment out or remove these lines (a sed sketch for commenting them out follows below):
SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
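A sketch using GNU sed to comment them out in place (the patterns assume the lines look exactly as quoted above; dumpfile.sql is a placeholder name):
sed -i -e 's/^SET @MYSQLDUMP_TEMP_LOG_BIN/-- &/' \
       -e 's/^SET @@SESSION.SQL_LOG_BIN/-- &/' \
       -e 's/^SET @@GLOBAL.GTID_PURGED/-- &/' dumpfile.sql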
Solution 2
You can also ignore the errors by using the -f option to load the rest of the dump file.
mysql -f <REPLACE_DB_NAME> -u <REPLACE_DB_USER> -h <DB_HOST_HERE> -p < dumpfile.sql
Just a macOS extra update to hjpotter92's answer.
To make sed recognize the pattern on macOS, you'll have to add a backslash before the = sign, like this:
sed -i old 's/\DEFINER\=`[^`]*`@`[^`]*`//g' file.sql
Problem: You're trying to import data (using a mysqldump file) into your MySQL database, but it seems you don't have permission to perform that operation.
Solution: Assuming your data is migrated, seeded and updated in your MySQL database, take a snapshot using mysqldump and export it to a file:
mysqldump -u [username] -p [databaseName] --set-gtid-purged=OFF > [filename].sql
From mysql documentation:
GTID - A global transaction identifier (GTID) is a unique identifier created
and associated with each transaction committed on the server of origin
(master). This identifier is unique not only to the server on which it
originated, but is unique across all servers in a given replication
setup. There is a 1-to-1 mapping between all transactions and all
GTIDs.
--set-gtid-purged=OFF SET @@GLOBAL.gtid_purged is not added to the output, and SET
@@SESSION.sql_log_bin=0 is not added to the output. For a server where
GTIDs are not in use, use this option or AUTO. Only use this option
for a server where GTIDs are in use if you are sure that the required
GTID set is already present in gtid_purged on the target server and
should not be changed, or if you plan to identify and add any missing
GTIDs manually.
Afterwards, connect to your MySQL server as the root user, grant the permissions, flush them, and verify that your user's privileges were updated correctly.
mysql -u root -p
UPDATE mysql.user SET Super_Priv='Y' WHERE user='johnDoe' AND host='%';
FLUSH PRIVILEGES;
mysql> SHOW GRANTS FOR 'johnDoe';
+------------------------------------------------------------------+
| Grants for johnDoe |
+------------------------------------------------------------------+
| GRANT USAGE ON *.* TO `johnDoe` |
| GRANT ALL PRIVILEGES ON `db1`.* TO `johnDoe` |
+------------------------------------------------------------------+
Now reload the data and the operation should be permitted.
mysql -h [host] -u [user] -p[pass] [db_name] < [mysql_dump_name].sql
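Note that updating mysql.user directly is generally discouraged; on a self-managed server you could instead grant the same privilege with a plain GRANT statement (a sketch; this will not help on RDS itself, where SUPER cannot be granted):
GRANT SUPER ON *.* TO 'johnDoe'@'%';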
Full Solution
All the above solutions are fine. Here I am going to combine them all so that it should work in all situations.
Fix the DEFINER
For Linux and Mac
sed -i old 's/\DEFINER\=`[^`]*`@`[^`]*`//g' file.sql
For Windows
Download Atom or Notepad++, open your dump SQL file with it, and press Ctrl+F.
Search for the word DEFINER and remove the clause DEFINER=`admin`@`%` (it may look slightly different for you) everywhere, then save the file.
For example,
before removing that clause: CREATE DEFINER=`admin`@`%` PROCEDURE MyProcedure
after removing that clause: CREATE PROCEDURE MyProcedure
Remove the 3 lines
Remove all three of these lines from the dump file. You can use the sed command, or open the file in the Atom editor, search for each line and remove it.
Example: open Dump2020.sql in Atom, press Ctrl+F, search for SET @@SESSION.SQL_LOG_BIN= 0, and remove that line.
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
There is an issue with your generated file
You might face this issue if your generated dump.sql file is not proper. I am not going to explain here how to generate a dump file.
Issue
The statement or line below in your dump file is creating the issue:
DEFINER=`username`@`%`
Simple Solution
The workaround is to remove all those entries from the SQL dump file and then import the data from the GCP console.
cat DUMP_FILE_NAME.sql | sed -e 's/DEFINER=`<username>`@`%`//g' > NEW-CLEANED-DUMP.sql
The above command removes all those lines from the dump file and creates a new, fresh dump file without the DEFINER clauses.
Try importing the new file (NEW-CLEANED-DUMP.sql).
If you are on AWS RDS
You might face this issue if your dump file is large. You can check its first 30 lines using
head -30 filename
Once you see the output, look for these lines and note their line numbers:
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
We will remove these lines by line number, for example lines 17, 18 and 24:
sed -e '24d;17d;18d' file-name.sql > removed-line-file-name.sql
To import a database file in .sql.gz format, remove the DEFINER and import it using the command below.
zcat path_to_db_to_import.sql.gz | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | mysql -u user -p new_db_name
Earlier, export the database in .sql.gz format using the command below.
mysqldump -u user -p old_db | gzip -9 > path_to_db_exported.sql.gz;
Import that exported database, removing the DEFINER, using the command below:
zcat path_to_db_exported.sql.gz | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | mysql -u user -p new_db
When you restore the backup, make sure to use the same username on the new server as on the old one.
I commented out all the lines starting with SET in the *.sql file and it worked.
If it helps, when I tried to restore a DB dump on my AWS MySQL RDS, I got this error:
ERROR 1227 (42000) at line 18: Access denied; you need (at least one of) the SUPER,
SYSTEM_VARIABLES_ADMIN or SESSION_VARIABLES_ADMIN privilege(s) for this operation
I didn't have to change the DEFINER or remove/comment out lines. I just did:
GRANT SESSION_VARIABLES_ADMIN ON *.* TO myuser@'myhost';
GRANT SYSTEM_VARIABLES_ADMIN ON *.* TO myuser@'myhost';
And I was able to do the restore.
None of the above solutions worked for me. I had to do the following:
Use the following flags with mysqldump:
mysqldump --databases <db1> <db2> --master-data=1 --single-transaction --order-by-primary --force -r all.sql -h<host> -u<user> -p<password>
Remove the line that looks like:
CHANGE MASTER TO MASTER_LOG_FILE='binlog.....
In my file, that was line #22, so I ran: sed -i '22d' all.sql
Import the data to your RDS:
mysql -h<host> -u<user> -p<password>
mysql> source all.sql
In my case (trying to execute a SQL file against AWS RDS), the beginning of my SQL statement looked like this:
DROP VIEW IF EXISTS `something_view`;
CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`%` SQL SECURITY DEFINER VIEW `something_view`...
All I had to do to fix it was to remove the ALGORITHM=UNDEFINED DEFINER=`root`@`%` SQL SECURITY DEFINER part of the above statement.
So the new statement looks like this:
CREATE VIEW `something_view` ...
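A sed sketch for stripping that whole security clause from every view definition in the file (GNU sed; the pattern is an assumption and may need adjusting to your dump):
sed -i 's/ALGORITHM=UNDEFINED DEFINER=`[^`]*`@`[^`]*` SQL SECURITY DEFINER //g' dump.sql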
* Answer may only be applicable to macOS *
When trying to import a .sql file into a docker container, I encountered the error message:
Access denied; you need (at least one of) the SUPER privilege(s) for
this operation
Then, while trying some of the other suggestions, I received the error below on macOS:
sed: RE error: illegal byte sequence
Finally, the following command from this resource resolved my "Access Denied" issue.
LC_ALL=C sed -i old 's/\DEFINER\=`[^`]*`@`[^`]*`//g' fileName.sql
So I could import into the docker database with:
docker exec -i dockerContainerName mysql -uuser -ppassword table < importFile.sql
Hope this helps! :)
Issue in dump.
Please try to create the dump in the following way:
mysqldump -h databasehost --user=databaseusername --password --single-transaction databasename | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | gzip > /tmp/database.sql.gz
Then, try to import it in the following way:
zcat /tmp/database.sql.gz | mysql -h database_host -u username -p databasename
You need to set the server parameter log_bin_trust_function_creators to ON on the server side. If it is Azure MariaDB, you can easily find this setting in the left-side blade.
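On a server where you can change global variables directly (rather than through a cloud portal), the equivalent statement is simply:
SET GLOBAL log_bin_trust_function_creators = 1;
Setting it this way requires the SUPER or SYSTEM_VARIABLES_ADMIN privilege, which is exactly what managed services often withhold, hence the portal setting.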
When running the following command from a Linux (CentOS 7) terminal, it appears to ask for the password for EVERY table in the loop. There are 500+ tables, and it is not reasonable for me to type in a password 500+ times. How can I fix the code below so that it only asks for the password a couple of times?
mysql -u root -p -Nse 'show tables' DATABASE_NAME | while read table; do mysql -u root -p -e "SET FOREIGN_KEY_CHECKS = 0; truncate table $table" DATABASE_NAME; done;
Edit
Is there a way to do this without having to put the password in the command line logs?
Put the password right after -p (no space). Say your password is PASSWORD:
mysql -u root -pPASSWORD -Nse 'show tables' DATABASE_NAME | while read table; do mysql -u root -pPASSWORD -e "SET FOREIGN_KEY_CHECKS = 0; truncate table $table" DATABASE_NAME; done;
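To address the edit about keeping the password out of the command line, one common approach is a client option file, which the mysql client reads automatically; a sketch:
# ~/.my.cnf (restrict permissions with chmod 600)
[client]
user=root
password=PASSWORD
With that in place, both mysql invocations can drop the password (and even -p) entirely:
mysql -Nse 'show tables' DATABASE_NAME | while read table; do mysql -e "SET FOREIGN_KEY_CHECKS = 0; truncate table $table" DATABASE_NAME; done;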
Consider rewriting this so that it only uses two database sessions: one to produce a list of statements to execute into a file, and a second (single) session to process those statements. There's potential for something to go wrong with the first query, so I'd be careful to keep the two tasks separate.
In the first session, suppress the formatting, and redirect stdout to a file
SELECT 'set foreign_key_checks = 0;' AS stmt ;
SELECT CONCAT('TRUNCATE TABLE `',t.table_schema,'`.`',t.table_name,'`;') AS stmt
FROM information_schema.tables t
WHERE t.table_schema = 'mydatabase'
ORDER BY t.table_name ;
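A sketch of the first session, assuming the queries above are saved as gen_truncates.sql (a hypothetical file name); -N suppresses the column-name headers and -B forces plain batch output:
mysql -u me -p -N -B --database mydatabase < gen_truncates.sql > myfile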
Verify that the file contains what you want.
Then (another useless use of cat) pipe the contents of that file into mysql
cat myfile | mysql -u me -p --database mydatabase
I have a backup script for my MySQL database, using mysqldump with the --tab option so it produces a .sql file for the structure and a .txt file (pipe-separated) for the content.
Some tables have foreign keys, so when I import it I'm getting the error:
ERROR 1217 (23000) at line 8: Cannot delete or update a parent row: a foreign key constraint fails
I know about using SET FOREIGN_KEY_CHECKS=0 (and SET FOREIGN_KEY_CHECKS=1 afterward). If I add those to each .sql file then the import works. But then obviously on the next mysqldump those get overwritten.
I also tried running it as a separate command, like below, but the error comes back:
echo "SET FOREIGN_KEY_CHECKS=0" | mysql [user/pass/database]
[all the imports]
echo "SET FOREIGN_KEY_CHECKS=1" | mysql [user/pass/database]
Is there some other way to disable FK checks on the command line?
You can also use the --init-command parameter of the mysql command.
For example: mysql --init-command="SET SESSION FOREIGN_KEY_CHECKS=0;" ...
MySQL 5.5 Documentation - mysql options
You can do this by concatenating the string to the file inline. I'm sure there's an easier way to concatenate strings and files, but it works.
cat <(echo "SET FOREIGN_KEY_CHECKS=0;") imports.sql | mysql
I don't think you need to set it back to 1 since it's just one session.
Log in to the MySQL command line:
mysql -u <username> -p -h <host_name or ip>
Then run:
1. SET FOREIGN_KEY_CHECKS=0;
2. USE <database_name>;
3. SOURCE /pathToFile/backup.sql;
4. SET FOREIGN_KEY_CHECKS=1;
5. exit
Just another way to do the same:
{ echo "SET FOREIGN_KEY_CHECKS=0;" ; cat imports.sql ; } | mysql
Another way with .gz files:
gunzip < backup.sql.gz | mysql --init-command="SET SESSION FOREIGN_KEY_CHECKS=0;" -u <username> -p
Based on the comments and answers, I ended up using this for a zipped database import with both InnoDB and MyISAM:
{ echo "SET FOREIGN_KEY_CHECKS=0;SET UNIQUE_CHECKS=0;" ; zcat dump.gz ; } | mysql
You can simply run any command from the command line, this way:
mysql -e "SET SESSION FOREIGN_KEY_CHECKS=1;"
Of course, you need to specify the username, password and host using -u, -p and -h.
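For example (a sketch with placeholder credentials), keeping in mind that a SET SESSION issued this way only lasts for that single mysql invocation, so it will not carry over to a separate import command:
mysql -u root -p -h 127.0.0.1 -e "SET SESSION FOREIGN_KEY_CHECKS=0;"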