I have a MySQL database, say test, which contains 10 tables. I know I can do DESCRIBE <table_name> to get the skeleton of that table, but I have to do that for each table individually.
This is my problem: is there any query or script I can write to get the skeleton of all those tables at once?
Try like this:
SELECT * FROM information_schema.columns WHERE TABLE_SCHEMA='test';
mysqldump can be told to skip data and dump only table schema.
mysqldump --no-data test
Use -u <user> to connect as <user> and -p to be prompted for the password. You may also want --compact for less verbose output; many other adjustments to the contents can be made with additional options.
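Putting those options together, a schema-only dump might look like the following sketch; the user name root and the output file name are assumptions, so adjust them to your setup:

```shell
# Schema only (--no-data), minimal output (--compact), prompt for password (-p)
mysqldump -u root -p --no-data --compact test > schema.sql
```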
Related
As a superadmin I want to take a backup of a whole database with a WHERE condition, like WHERE accoutid = 88. Is it possible to do this?
WHERE clauses apply to tables, not databases, so you can back up tables with WHERE clauses. Since mysqldump excludes tables not explicitly listed on the command line, you won't get the entire database unless you list every table in the db explicitly (which you could do).
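One way to "list every table explicitly" without typing them all is to generate one mysqldump command per table, each with the same WHERE clause. This is only a sketch: the database name, credentials, and table list below are placeholders, and in practice you would fill the list with something like `mysql -N -e "SHOW TABLES" mydb`.

```shell
# Generate one mysqldump command per table, all sharing the same WHERE clause.
# Placeholder table list; in practice: TABLES=$(mysql -N -e "SHOW TABLES" mydb)
DB=mydb
WHERE="accoutid=88"
TABLES="accounts orders payments"
for t in $TABLES; do
  printf 'mysqldump -u root -p %s %s --no-create-info --where="%s" >> data.sql\n' \
    "$DB" "$t" "$WHERE"
done > dump_commands.sh
cat dump_commands.sh
```

Review dump_commands.sh and then run it with sh to produce the combined data.sql.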
Here is the mysqldump documentation.
Try something like this:
mysqldump --host=localhost --user=db_user --password=db_pass db_name table_name --no-create-info --where="accoutid=88" > data.sql
I am trying to create a table on my local machine with the same description as a table on a remote machine. I just want to create the table with the same columns; I don't care about the row data.
The table has around 150 columns, so it's very tedious to write the CREATE TABLE command by hand. Is there an elegant way of doing this?
Are you referring to something like:
SHOW CREATE TABLE table_name;
which shows the SQL statement used to create that table.
You can get the description of your table on host1 with DESCRIBE by calling something like
DESCRIBE `myTable`;
This will need some manual effort to build up the CREATE TABLE command, but has the advantage of giving all the important details in one view.
Another way to unload the structure of your database on host1 is mysqldump. To do so, call mysqldump with the -d (no data) option, similar to
mysqldump -d [options_like_user_and_host] databasename tablename;
You will get a file which can be used almost directly on host2. Attention: some relations, e.g. to other tables, might not be included.
You can use mysqldump to export the database table structure only; use -d or --no-data to achieve it (see the official documentation). For example:
mysqldump -d -h hostname -u yourmysqlusername -p databasename > tablestruc.sql
If your local machine is accessible from remote, then change the hostname and execute this command from remote machine.
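If both servers can reach each other, you can also skip the intermediate file and pipe the structure straight into the local database. The host names, user names, and database names below are placeholders:

```shell
# Dump structure (-d) from the remote server and feed it directly to the local one
mysqldump -d -h remote.example.com -u remoteuser -p dbname \
  | mysql -u localuser -p localdb
```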
Is there a way to exclude certain tables (i.e. tables whose names start with 'test') from the mysqldump command?
mysqldump -u username -p database \
--ignore-table=database.table1 \
--ignore-table=database.table2 etc > database.sql
But the problem is, there are around 20 tables whose names start with 'test'. Is there any way to skip these tables without a long command like "--ignore-table=database.table1 --ignore-table=database.table2 --ignore-table=database.table3 ... --ignore-table=database.table20"?
And is there any way to dump only the schema but no data?
Unfortunately mysqldump requires table names to be fully qualified so you can't specify a parameter as a regex pattern.
You could, however, use a script to generate your mysqldump by having it connect to the information_schema and list all the tables using something like:
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('INFORMATION_SCHEMA', 'mysql', 'PERFORMANCE_SCHEMA');
And then having it generate --ignore-table parameters for all table names that match the regex ^test.
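As a sketch of that generation step, the shell below turns a list of table names (which you would normally fetch with the information_schema query above) into --ignore-table parameters. The database name and table list here are placeholders:

```shell
# Build --ignore-table flags for every table whose name starts with "test".
# Placeholder list; in practice: TABLES=$(mysql -N -e "SHOW TABLES" database)
DB=database
TABLES="testTable1 testTable2 realTable testTable3"
IGNORES=$(for t in $TABLES; do
  case $t in test*) printf ' --ignore-table=%s.%s' "$DB" "$t";; esac
done)
echo "mysqldump -u root -p $DB$IGNORES > mysqldump.sql"
```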
To dump only the schema and no data you can use --no-data=true as a parameter.
If you want everything for all of the non-test tables but only the schema for another table, then you need two separate mysqldump commands: one with --ignore-table entries for all the test tables plus the schema-only table, and another for only the schema of the schema-only table, with the second appending to the output file via the >> operator.
So your resulting script might generate something like:
mysqldump -u root -ptoor databaseName --ignore-table=databaseName.testTable1 --ignore-table=databaseName.testTable2 --ignore-table=databaseName.testTable3 --ignore-table=databaseName.schemaOnlyTable > mysqldump.sql
mysqldump -u root -ptoor databaseName schemaOnlyTable --no-data=true >> mysqldump.sql
I need to restore a dumped database, but without discarding existing rows in tables.
To dump I use:
mysqldump -u root --password --databases mydatabase > C:\mydatabase.sql
To restore, I do not use the mysql command, since it will discard all existing rows; instead, mysqlimport should do the trick. But how? Running:
mysqlimport -u root -p mydatabase c:\mydatabase.sql
says "table mydatabase.mydatabase does not exist". Why does it look for tables? How do I restore the dump of the entire database without discarding existing rows in existing tables? I could dump single tables if mysqlimport requires it.
What to do?
If you are concerned with stomping over existing rows, you need to mysqldump it as follows:
MYSQLDUMP_OPTIONS="--no-create-info --skip-extended-insert"
mysqldump -uroot -ppassword ${MYSQLDUMP_OPTIONS} --databases mydatabase > C:\mydatabase.sql
This will do the following:
remove CREATE TABLE statements and use only INSERTs.
It will INSERT exactly one row at a time. This helps mitigate rows with duplicate keys
With the mysqldump performed in this manner, now you can import like this
mysql -uroot -p --force -Dtargetdb < c:\mydatabase.sql
Give it a Try !!!
WARNING: dumping with --skip-extended-insert will make the mysqldump output really big, but at least you can deal with each duplicate one by one. It will also increase the time it takes to reload the dump.
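A gentler alternative to --force, sketched below on a stand-in dump file, is to rewrite each INSERT as INSERT IGNORE so that duplicate-key rows are silently skipped on reload instead of raising errors. The file name and its contents here are placeholders for your real dump:

```shell
# Stand-in for the real mydatabase.sql produced by mysqldump
cat > dump.sql <<'EOF'
INSERT INTO `mytable` VALUES (1,'a');
INSERT INTO `mytable` VALUES (2,'b');
EOF
# Turn every INSERT into INSERT IGNORE so duplicates are skipped on reload
sed 's/^INSERT INTO/INSERT IGNORE INTO/' dump.sql > dump_ignore.sql
# then: mysql -u root -p targetdb < dump_ignore.sql
```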
I would edit the mydatabase.sql file in a text editor, dropping the lines that drop tables or delete rows, then import the file using the mysql command as normal.
mysql -u username -p databasename < mydatabase.sql
The mysqlimport command is designed for tab-delimited files created with the MySQL statement SELECT ... INTO OUTFILE, rather than for direct database dumps.
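If the dump is large, the edit described above can be scripted instead of done by hand. The sketch below removes the DROP TABLE lines from a stand-in dump file with sed; the file name and its contents are placeholders for your real dump:

```shell
# Stand-in for mydatabase.sql
cat > mydatabase.sql <<'EOF'
DROP TABLE IF EXISTS `mytable`;
CREATE TABLE `mytable` (id INT);
INSERT INTO `mytable` VALUES (1);
EOF
# Delete the lines that would drop existing tables before importing
sed '/^DROP TABLE/d' mydatabase.sql > mydatabase_keep.sql
# then: mysql -u username -p databasename < mydatabase_keep.sql
```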
This sounds like it is much more complicated than you are describing.
If you do a backup the way you describe, it has all the records in your database. Then you say that you do not want to delete existing rows from your database and load from the backup? Why? The reason why the backup file (the output from mysqldump) has the drop and create table commands is to ensure that you don't wind up with two copies of your data.
The right answer is to load the mysqldump output file using the mysql client. If you don't want to do that, you'll have to explain why to get a better answer.
Is it possible to take a daily backup (only that day's records) of a particular table in the DB? Once the backup is done, those records need to be deleted from the table.
Will this scenario work without using a scripting language like PHP or Perl?
The easiest way is to use a script:
1) select the records you need
2) put them in some form of dump
3) run DELETE FROM the table with the parameters you need
Other constructions (triggers with stored procedures, etc.) will, IMHO, eventually shoot you in the foot.
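Those three steps can live in a single shell script run from cron, with no PHP or Perl involved. This is only a sketch: the table name events, the date column created_at, and the credentials are assumptions, and the mysqldump/mysql commands are written into a script for review rather than executed directly:

```shell
# Nightly job: dump today's rows from one table, then delete them.
# Table/column names and credentials are placeholders.
DAY=$(date +%F)
WHERE="DATE(created_at) = '$DAY'"
{
  printf 'mysqldump -u root -p --no-create-info --where="%s" db_name events > events_%s.sql\n' \
    "$WHERE" "$DAY"
  printf 'mysql -u root -p db_name -e "DELETE FROM events WHERE %s"\n' "$WHERE"
} > nightly.sh
cat nightly.sh
```

Run nightly.sh with sh once the dump command is verified; keeping the dump and the delete in that order ensures records are only removed after they have been backed up.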
mysqldump -u root -p db_name > db_backup.sql
Using the above command we can back up a database, and if you want to take a backup of a selected table you can use: mysqldump -c -u <user> -p db_name table_name > table_backup.sql
And to drop the database use DROP DATABASE db_name, and to drop a specific table use DROP TABLE table_name.