I know I can dump a single table using the --where option, but I was wondering if it's possible to dump a table and have all the linking records dumped along with it if they belong to a certain account id.
All my tables are InnoDB and have been set up with foreign key constraints using cascade delete. If I delete from the main table "account" where account_id = 1, then all the records that link to an account_id of 1 are also deleted.
So what I want is something similar in concept: I want to dump all the data for account_id = 1 from all the tables that link to the "account" table, in one command. If I run the following command, I believe it will only dump the one table:
mysqldump -t -u [username] -p test account --where="account_id = 1"
Is there another way to dump one table with a where clause and automatically dump the data in the linking tables, without having to write separate dump commands for each table? Ultimately I want to end up with a .sql file for each account, like "account_1.sql", "account_2.sql", etc.
I had put this question in my favorites list to see if someone would come up with an idea, and as I expected, no one did.
One rather funny way is to clone the DB, delete all the account ids you don't need (the deletes will cascade through all the tables), and then dump what remains (which will be exactly the account ids you require).
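A rough sketch of that approach from the shell (the database names are placeholders, and credentials are assumed to be in ~/.my.cnf so nothing prompts):

mysqldump test > full.sql
mysql -e "CREATE DATABASE test_clone"
mysql test_clone < full.sql
# this delete cascades to every table that references account
mysql test_clone -e "DELETE FROM account WHERE account_id <> 1"
mysqldump test_clone > account_1.sql
mysql -e "DROP DATABASE test_clone"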
I ran into the same issue with MySQL and DBIx::Class (an ORM in Perl). What I wanted to do was clone a thousand accounts (with obfuscated names and emails). I ended up writing a script that traverses the whole database through the foreign keys for a given user id and generates all the required INSERT statements in the proper order.
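The script itself was Perl-specific, but the core idea can be sketched from the shell with INFORMATION_SCHEMA, at least for tables that reference account directly - a real solution has to recurse for grandchild tables. The database name and id are placeholders, and credentials are again assumed to be in ~/.my.cnf:

DB=test; ID=1
# dump the parent row first
mysqldump $DB account --where="account_id = $ID" > account_$ID.sql
# find every table/column with a foreign key pointing at account.account_id
# and append a filtered dump of each one (direct children only)
mysql -N -e "SELECT TABLE_NAME, COLUMN_NAME
    FROM information_schema.KEY_COLUMN_USAGE
    WHERE REFERENCED_TABLE_SCHEMA = '$DB'
      AND REFERENCED_TABLE_NAME = 'account'
      AND REFERENCED_COLUMN_NAME = 'account_id'" |
while read TBL COL; do
    mysqldump $DB "$TBL" --where="$COL = $ID" >> account_$ID.sql
done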
I have a simple (I think) problem.
I have a dump of a MySQL database, taken before a disaster.
I need to import from this dump, replacing only three columns of a single table (over 5000 rows, which is why I'm wary of doing it manually).
What should I do to achieve this without destroying anything else in the working database?
I am imagining there is an option to skip columns during import and replace (with an UPDATE command, I think) only the ones I need.
I will be thankful for help :(
------------ UPDATE ---------------
Okay, I used phpMyAdmin: first I ran a SELECT query to get only the three columns from the whole table, then I dumped the result, so now I have an SQL file containing a dump of only those three columns.
Now, having this dump, can I (I don't know what to call it) edit or change something inside this typical MySQL dump file so that importing it replaces all the existing values in those three columns?
I mean - empty the existing columns first, then maybe use "INSERT INTO", but over the whole table?
It is just over 2600 rows, so I cannot change them manually; it would be better to use automation.
As far as I know, this is not possible directly. You can try to use sed to extract only the table you want from the dump - but pulling out specifically 3 columns would be complicated, if not impossible.
Can I restore a single table from a full mysql mysqldump file?
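For what it's worth, the usual sed trick for pulling one table out of a full dump keys off the `-- Table structure for table` comment headers that mysqldump writes (table and file names here are placeholders, and the output includes the next table's header line, which you can trim) - but it still extracts whole rows, not individual columns:

sed -n '/^-- Table structure for table `mytable`/,/^-- Table structure for table/p' dump.sql > mytable.sql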
The best way would be, as @Ali said, to just import it to a temp DB and then export the required data/columns to a new dump.
Restore the DB to a temp db, then:
mysql> CREATE TABLE `tempTable` AS SELECT `columnYouWant` from `table`;
$> mysqldump yourDB tempTable > temp.sql
// Since you updated the question:
You probably want to use REPLACE INTO - i.e. re-create your dump with mysqldump's --replace option - though this will delete each row and re-insert it, not just update the individual columns. If you want just the individual columns, the only way I can think of is UPDATE. To use UPDATE, your options are:
Multi-table update
UPDATE mydb.mytable AS dest JOIN tempdb.mytable AS origin USING (prim_key)
SET dest.col1 = origin.col1,
dest.col2 = origin.col2,
...
Then drop the temp database.
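For completeness, the cleanup afterwards (assuming the temporary database was named tempdb, as in the query above):

DROP DATABASE tempdb;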
Search/Replace Dump
Take your dump and use the INSERT ... ON DUPLICATE KEY UPDATE option to add it to the end of each insert line (assuming you exported/dumped individual insert commands).
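For instance, a sed pass can append that clause to each INSERT - a sketch only, assuming the dump was made with --skip-extended-insert (one INSERT statement per row) and using hypothetical table/column names:

sed 's/^\(INSERT INTO `mytable`.*\);$/\1 ON DUPLICATE KEY UPDATE col1 = VALUES(col1), col2 = VALUES(col2);/' dump.sql > upsert.sql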
I have a large database schema with over 170 tables, many of which depend upon others. For example, customers and employees both have a person_id which refers to a record in the people table.
I want to be able to generate a baseline.sql file which creates all these tables with default values populated. I exported an existing database with everything properly formatted, but because the resulting baseline.sql file generates the tables in alphabetical order, I end up with issues like customers and employees pointing to people who don't exist yet (because C < E < P, alphabetically).
Is there a way to export the database while considering the necessary table creation and population order?
I know foreign keys can be recursive or otherwise cause problems, but given that my dataset has no such cases, and given how common this problem must be, I feel like there might be something easy out there before I reinvent the wheel.
Add this to the beginning of your sql file:
SET foreign_key_checks = 0;
And add this to the end:
SET foreign_key_checks = 1;
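If you'd rather not edit the file by hand, the same wrapping can be done from the shell (file names are placeholders):

{ echo 'SET foreign_key_checks = 0;'; cat baseline.sql; echo 'SET foreign_key_checks = 1;'; } > baseline_safe.sql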
I'm trying to clean up a 7.5GiB table in MySQL by executing the following command:
DELETE FROM wp_commentmeta WHERE comment_id NOT IN (SELECT comment_id FROM wp_comments);
There is no foreign key between the two fields. Because of the size of (the second? both?) tables, attempting to execute this results in the following error:
Multi-statement transaction required more than 'max_binlog_cache_size'
bytes of storage; increase this mysqld variable and try again
The table is huge enough that I can't feasibly raise max_binlog_cache_size to accommodate this request. Short of dumping the two tables to disk and diffing their contents with a parser offline, is there some way to restructure the query to perform what I need to do more efficiently?
Some things I could do (but I wish to choose the correct/smart course of action):
Create a new table with a foreign key constraint between the two fields, insert into it, then delete the old table and rename the new one.
Use a MySQL derived/virtual table to create a view I could export and then re-import.
Dump the two tables and compare them with a parser to generate a list of IDs to delete.
Suggestions welcome, please!
Try this one:
DELETE wcm
FROM wp_commentmeta wcm
LEFT JOIN wp_comments wc ON wc.comment_id = wcm.comment_id
WHERE wc.comment_id IS NULL;
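If a single statement still exceeds max_binlog_cache_size, a common workaround - a sketch, so test it on a copy first - is to delete in chunks so that each transaction stays small. MySQL does not allow LIMIT in the multi-table DELETE ... JOIN form, so the chunked version has to use the single-table form:

DELETE FROM wp_commentmeta
WHERE NOT EXISTS (SELECT 1 FROM wp_comments wc
                  WHERE wc.comment_id = wp_commentmeta.comment_id)
LIMIT 10000;
-- run repeatedly until it reports 0 rows affected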
I'm using MySQL.
I have a "customers" table in database A (located on server).
I want to add them to the customer table in database B (located on localhost) without deleting anything.
I'm very new to SQL commands or MySQL in general so try to be the most explanatory as you can.
If you need more information about something I will edit the post and add it.
(I have MySQL workbench)
Thanks.
On server (DB A):
# Sets our database as the default, so we won't have to write `database`.`table` in the subsequent queries.
# Replace *database* with your database name.
USE *database*;
# Makes a copy of the customers table - named customers_export.
# This copy contains all the fields and rows from the original table (without indexes),
# and if you want, you can add a WHERE clause to filter out some data (see the example below)
CREATE TABLE `customers_export` SELECT * FROM `customers`;
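For example, a filtered variant of the same copy (created_at is a hypothetical column, purely for illustration):

CREATE TABLE `customers_export` SELECT * FROM `customers` WHERE `created_at` >= '2015-01-01';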
Since you are using MySQL Workbench, do a Data Export (in the MANAGEMENT section), choosing the relevant database and only the customers_export table.
On localhost (DB B):
Assuming the database name is the same (otherwise you will need to change the database name in the dump file), do a Data Import/Restore by selecting the dump file which we exported in the previous step.
This will create the customers_export table.
# Set as default
USE *database*;
# If the data you want to import contains *NO* collisions (no shared primary or unique key values between the two tables), and the structure and table name are the same:
INSERT INTO `customers` SELECT * FROM `customers_export`;
And we are done.
If it does have collisions, or you want to change the column names, some of the values, etc., you will need to either modify the SELECT statement or update the customers_export table to suit your needs.
Also, back up the customers table on the second server - in case something goes wrong with the insert.
Finally - drop the customers_export table on both servers.
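A minimal sketch of that backup and the final cleanup, in the same style as above:

# back up the target table on DB B before the insert
CREATE TABLE `customers_backup` SELECT * FROM `customers`;
# once everything checks out, drop the helper table (run on both servers)
DROP TABLE `customers_export`;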
Just use this on localhost:
mysqldump -u YOUR_DATABASE_USER -pYOUR_DATABASE_PASS -h YOUR_SERVER_IP databaseA customers > customer.sql
mysql -u YOUR_DATABASE_USER -pYOUR_DATABASE_PASS databaseB < customer.sql
(Note there is no space between -p and the password; with a space, the client prompts for a password and misreads the next argument.)
PS: If you want some explanation, just tell me.
I need to create a dump file such that when I execute it, I have no dependency issues:
table creation statements run before the queries that use them
parent tables come before child tables, etc.
no failed inserts due to foreign key failures
Two tables may refer to each other with foreign keys, so it is not always possible to create and insert "parent first".
Use mysqldump. Its output disables foreign key checks before the data is imported and re-enables them afterward (this also makes the import much faster).
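For reference, the header and footer of a typical mysqldump file contain version-conditional statements like these (the exact /*!40014 ... */ wrappers can vary between versions), which is what makes the import order-insensitive:

/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
-- ... table definitions and INSERT statements ...
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;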