As a superadmin, I want to take a backup of a whole database with a WHERE condition, like where accoutid = 88. Is it possible to do this?
Where clauses apply to tables, not databases, so you can back up tables with WHERE clauses. Since the mysqldump syntax excludes tables not explicitly listed in the mysqldump command, you won't get the entire database unless you list every table in the db explicitly (which you could do).
Here is the mysqldump documentation.
This answer will explain what to do.
Try something like this:
mysqldump --host=localhost --user=db_user --password=db_pass --no-create-info --where="accoutid=88" db_name table_name > data.sql
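If you want more of the database, you can list several tables after the database name and the --where option is applied to every table dumped. A rough sketch (table1 and table2 are placeholders, and each listed table must actually have an accoutid column or mysqldump will report an error for it):
mysqldump --host=localhost --user=db_user --password=db_pass --no-create-info --where="accoutid=88" db_name table1 table2 > data.sql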
This may seem like a very dumb question, but I never learned it any other way and I just want some clarification.
I started to use MySQL a while ago, and in order to test various scenarios I back up my databases. I used mysqldump for that:
Export:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases > filename.sql
Import:
mysql -hSERVER -uUSER -pPASSWORD < filename.sql
Easy enough, and it worked quite well up until now, when I noticed a little problem with this "setup": it does not fully "reset" the databases and tables. If, for example, an additional table is added AFTER a dump file has been created, that table will not disappear when you import the same dump file. The import essentially only "corrects" tables that are already there and recreates any databases or tables that are missing, but it does not remove additional tables whose names are not in the dump file.
What I want to do is to completely reset all the databases on a server when I import such a dump file. What would be the best solution? Is there a special import function reserved for that purpose or do I have to delete the databases myself first? Or is that a bad idea?
You can use the parameter --add-drop-database to add a "drop database" statement to the dump before each "create database" statement.
e.g.
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --add-drop-database >filename.sql
see here for details.
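For illustration, a dump taken with --add-drop-database then contains something like the following before each database's tables (the exact version comments and character set vary with the server version; foo is just a placeholder name):
/*!40000 DROP DATABASE IF EXISTS `foo`*/;
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `foo`;
USE `foo`;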
There's nothing magic about the dump and restore processes you describe. mysqldump writes out SQL statements that describe the current state of the database or databases you are dumping. It has to fetch a list of tables in each database you're dumping, then it has to read the tables one by one and write them out as SQL. On databases of any size, this takes time.
So, if you create a new table while mysqldump is running, it may not pick up that new table. Similarly, if your application software changes contents of tables while mysqldump is running, those changes may or may not show up in the backup.
You can look at the .sql files mysqldump writes out to see what they have picked up. If you want to be sure that your dumped .sql files are perfect, you need to run mysqldump on a quiet server -- one where nobody is running data definition language.
MySQL hot backup solutions are available; you may want to look into those.
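As a sketch of one common option if you cannot quiet the server (assuming your tables use a transactional engine such as InnoDB; it does not help MyISAM tables), --single-transaction takes the dump inside a single consistent snapshot without blocking writers:
mysqldump -hSERVER -uUSER -pPASSWORD --all-databases --single-transaction --add-drop-database > filename.sql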
The OP may want to look into
mysql_install_db
if they want a fresh start with the post-install default settings before restoring one or more dumped DBs. For production servers, another useful script is:
mysql_secure_installation
Also, they may prefer to dump the DB(s) they created separately:
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo > foo.sql
to avoid inadvertently changing the internal DBs: mysql, information_schema, performance_schema.
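Several of their own databases can also be named after --databases in a single dump (foo and bar here are just placeholder names):
mysqldump -hSERVER -uUSER -pPASSWORD --databases foo bar > userdbs.sql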
I have a MySQL database, say test. It contains 10 tables. I know I can do describe <table_name> to get the skeleton of a table, but I have to do that for each table individually.
This is my problem: is there any query or script I can write to get the skeleton of all those tables at once?
Try like this:
SELECT * FROM information_schema.columns Where TABLE_SCHEMA='test';
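If you only want the describe-style information rather than every column of that view, something along these lines should work (all of these columns exist in information_schema.columns):
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_KEY, COLUMN_DEFAULT, EXTRA
FROM information_schema.columns
WHERE TABLE_SCHEMA = 'test'
ORDER BY TABLE_NAME, ORDINAL_POSITION;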
mysqldump can be told to skip data and dump only table schema.
mysqldump --no-data test
Use the options -u <user> to connect as <user> and -p to ask for the password. Maybe you also want --compact to be less verbose, and possibly others. Many adjustments to the contents can be made using other options.
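Putting that together, a minimal sketch (root and the output file name are just examples):
mysqldump -u root -p --no-data --compact test > test_schema.sql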
I need to restore a dumped database, but without discarding existing rows in tables.
To dump I use:
mysqldump -u root --password --databases mydatabase > C:\mydatabase.sql
To restore I do not use the mysql command, since it will discard all existing rows; instead, mysqlimport should do the trick, obviously. But how? Running:
mysqlimport -u root -p mydatabase c:\mydatabase.sql
says "table mydatabase.mydatabase does not exist". Why does it look for tables? How to restore dump with entire database without discarding existing rows in existing tables? I could dump single tables if mysqlimport wants it.
What to do?
If you are concerned with stomping over existing rows, you need to mysqldump it as follows:
MYSQLDUMP_OPTIONS="--no-create-info --skip-extended-insert"
mysqldump -uroot -ppassword ${MYSQLDUMP_OPTIONS} --databases mydatabase > C:\mydatabase.sql
This will do the following:
Remove CREATE TABLE statements and use only INSERTs.
INSERT exactly one row at a time; this helps mitigate rows with duplicate keys.
With the mysqldump performed in this manner, you can now import like this:
mysql -uroot -p --force -Dtargetdb < c:\mydatabase.sql
Give it a try!
WARNING: Dumping with --skip-extended-insert will make the dump file really big, but at least you can handle each duplicate row one by one. It will also increase the time it takes to reload the dump.
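As an aside beyond the answer above: mysqldump also has an --insert-ignore option, which writes INSERT IGNORE statements so duplicate-key rows are skipped on reload without relying on --force. A sketch of the same approach with it added:
MYSQLDUMP_OPTIONS="--no-create-info --skip-extended-insert --insert-ignore"
mysqldump -uroot -ppassword ${MYSQLDUMP_OPTIONS} --databases mydatabase > C:\mydatabase.sql
mysql -uroot -p -Dtargetdb < c:\mydatabase.sql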
I would edit the mydatabase.sql file in a text editor, dropping the lines that drop tables or delete rows, and then import the file with the mysql command as normal.
mysql -u username -p databasename < mydatabase.sql
The mysqlimport command is designed for files created with SELECT ... INTO OUTFILE rather than direct database dumps.
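For illustration (the table name and path here are made up), the usual pairing looks like this; the first statement runs in the mysql client, the second from the shell, and mysqlimport derives the target table name from the file name, which is why it went looking for a table called mydatabase:
SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.txt';
mysqlimport -u root -p mydatabase /tmp/mytable.txt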
This sounds like it is much more complicated than you are describing.
If you do a backup the way you describe, it has all the records in your database. Then you say that you do not want to delete existing rows from your database and load from the backup? Why? The reason why the backup file (the output from mysqldump) has the drop and create table commands is to ensure that you don't wind up with two copies of your data.
The right answer is to load the mysqldump output file using the mysql client. If you don't want to do that, you'll have to explain why to get a better answer.
How do I remove selected tables from a MySQL dump? I would like to remove the following tables from my dump file. How should I do this?
"DATABASECHANGELOG", "DATABASECHANGELOGLOCK"
I guess you could take the mysqldump with the following command, which ignores the specific tables while making the dump, so you avoid the risk of having to remove the tables from the file after the dump has been taken:
mysqldump -u username -p database_name --ignore-table=database_name.table1 --ignore-table=database_name.table2 > test.sql
OK, so I need to restore a table and I do:
mysqldump --opt database table_name < table_name.sql
I hit enter and done! Well, not really: when I go to see if there is anything in the table, it shows 0 records.
I have looked into table_name.sql and I see two records.
What am I doing wrong?
mysqldump is the wrong command for restoring from a backup.
You need to run mysql, as in, the mysql client. It's generally something like this:
mysql -u username -p database_name < sqlfile.sql
That will use your file as input to the mysql client, which subsequently executes the SQL.
mysqldump just exports the data to an SQL script. You can restore with this:
mysql db < file.sql