Export rows with foreign keys to another database - mysql

I'm working with MySQL and Symfony2. I have a database with 15 tables. I want to export all data starting from a row in the main table, go through every table I have, select all data related to that row, and move it to another database (which is not empty), without losing the foreign key relations.
I will try to explain it with a very basic example.
screenshot here
So I want to select the user with id 1, then all threads connected to that user, and from those threads all of their posts. I want to export all this data to another database (which is also not empty) and keep the data integrity with all its relations.
Is there a tool for this, or how can I create such an export script?
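If both databases live on the same MySQL server, one approach is a set of INSERT ... SELECT statements that walk the relationships outward from the root row, inserting parents before children so the foreign keys always resolve. A minimal sketch, assuming hypothetical table and column names (`user`, `thread.user_id`, `post.thread_id`) and schemas called `source_db` and `target_db`:

```sql
-- Copy the root row first, so child rows have something to reference.
INSERT INTO target_db.user
SELECT * FROM source_db.user WHERE id = 1;

-- Then the threads belonging to that user...
INSERT INTO target_db.thread
SELECT * FROM source_db.thread WHERE user_id = 1;

-- ...and finally the posts belonging to those threads.
INSERT INTO target_db.post
SELECT p.*
FROM source_db.post p
JOIN source_db.thread t ON p.thread_id = t.id
WHERE t.user_id = 1;
```

This only works as-is if the copied ids do not collide with rows already in the target database; otherwise you would need to remap the keys during the copy. For a per-table dump of just the matching rows, `mysqldump` with a `--where` clause is another option.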

Related

MySql - Can I add a label to a table and retrieve using sql query

I have a requirement to provide a REST endpoint to create/delete tables for a privileged user.
When user makes a request to create table 'xyz', I create table with prefix "user_" and return response saying
'user_xyz' is created.
This way I know what tables are candidates for deletion when a request is made.
But I wish to create table "xyz" as requested by user and would like to add some label like "deletable" so that I can
query to find if a table can be deleted.
One option is adding a comment to the table, but then I have to query information_schema, and a comment does not feel like the right place for this. Can this problem be solved with any other approach when I use a MySQL database?
I don't think you have many options other than the ones you suggest:
- prefix/suffix the table name
- use a table comment
- create a meta table with a UserId field (has the advantage of foreign keys) and a TableName field (has the disadvantage that the table name is stored separately from the actual table, so the table can be renamed without this meta table being updated)
- create a separate schema for each user
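The meta-table option could look something like this (table and column names are illustrative, and a `users` table with an `id` primary key is assumed):

```sql
CREATE TABLE deletable_tables (
    user_id    INT         NOT NULL,
    table_name VARCHAR(64) NOT NULL,
    PRIMARY KEY (user_id, table_name),
    FOREIGN KEY (user_id) REFERENCES users (id)
);

-- Record the table as deletable when it is created...
INSERT INTO deletable_tables (user_id, table_name) VALUES (42, 'xyz');

-- ...and check the registry before honouring a delete request.
SELECT 1 FROM deletable_tables
WHERE user_id = 42 AND table_name = 'xyz';
```

The remaining risk, as noted above, is that a rename of the real table bypasses this registry, so the REST endpoint should be the only path for creating and renaming these tables.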

delete entry in DB without reference

We have the below requirement:
Currently, we get the data from the source (another server, another team, another DB) into a temp DB via batch jobs. Once the data is in our temp DB, we process it, transform it, and update our primary DB with the difference (i.e. the records that changed or were newly added).
Source->tempDB (daily recreated)->delta->primaryDB
Requirement:
- To delete the data in the primary DB once it is deleted in the source.
Ex: suppose a record with ID=1 is created in the source; it comes to the temp DB and eventually makes it to the primary DB. When this record is deleted in the source, it should be deleted in the primary DB as well.
Challenge:
How do we delete from the primary DB when there is nothing to refer to in the temp DB (since the record is already deleted in the source, nothing arrives in the temp DB)?
Naive approach:
- We could clean up the primary DB before every transform and load afresh. However, it takes a significant amount of time to clean up and repopulate the primary DB every time.
You could create triggers on each table that fill a history table with deleted entries. Sync that over to your tempDB and use it to delete the corresponding rows in your primary DB.
You want either one delete-history table per table, or a combined history table that also records which table triggered the deletion.
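A delete trigger of this kind might look like the following sketch (SQL Server syntax, since linked servers are mentioned below; the `Orders` table and its `Id` column are illustrative):

```sql
CREATE TABLE OrdersDeleteHistory (
    Id        INT      NOT NULL,
    DeletedAt DATETIME NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER trg_Orders_Delete
ON Orders
AFTER DELETE
AS
BEGIN
    -- "deleted" is the pseudo-table holding the rows just removed.
    INSERT INTO OrdersDeleteHistory (Id)
    SELECT Id FROM deleted;
END;
```

The history table then travels to tempDB with the regular batch jobs, and the delta step deletes any primary-DB row whose id appears in it.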
You might want to look into SQL Compare or other tools for synching tables.
If you have access to tempDB and primeDB at the same time (same server or linked servers), you could also try a
DELETE FROM primeDB.Tablename
WHERE NOT EXISTS (
    SELECT 1
    FROM tempDB.Tablename
    WHERE tempDB.Tablename.Id = primeDB.Tablename.Id
);
which will perform awfully - ask your db designers.
In this scenario, if tempDB and the primary DB have no direct reference, you can use event notifications at the database level to track changes.
Here is a link on the same topic:
https://www.mssqltips.com/sqlservertip/2121/event-notifications-in-sql-server-for-tracking-changes/

database table change shows all associated tables

In a database, one table (say tableA) may be associated with multiple other tables, so if I change the structure of tableA, e.g. delete a referenced column, all the other associated tables need to change too.
Is there a tool that can show which other tables would need to be changed if I modify tableA?
You can use, for example, MySQL Workbench -> Reverse Engineer to see how the tables are connected to each other; that assumes the database has proper primary and foreign keys.
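Besides a diagramming tool, you can query the metadata directly. In MySQL, `information_schema.KEY_COLUMN_USAGE` lists every foreign key that points at a given table (substitute your own schema name):

```sql
-- Find all tables that reference tableA via a foreign key.
SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_NAME   = 'tableA'
  AND REFERENCED_TABLE_SCHEMA = 'your_database';
```

Each row names a dependent table and the column it uses, which is exactly the set of tables you would need to review before altering tableA.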

Importing new data to master database versus temporary database?

I am designing a MySQL database for a new project. I will be importing 50-60 MB of data on a daily basis.
There will be a main table with a primary key. Then there will be child tables with their own primary key and a foreign key pointing back to the main table.
New data has to be parsed from a giant text file and then some minor manipulations made prior to importing into the master database. The parsing and import operation may involve a significant amount of troubleshooting so I want to import new data into a temporary database and ensure its integrity before adding to the master.
For this reason, I thought initially to parse and import new data into a separate, temporary database each day. In this way, I would be able to inspect the data prior to adding to the master and at the same time I would have each day's data stored as a separate database should I ever need to rebuild the master later on from the individual temporary databases.
I am considering the use of primary keys / foreign keys with the InnoDB engine in order to maintain relational integrity across tables. This means I have to worry about auto-increment ids (primary key) not having any duplicates when I go to import the new data each day.
So, given this situation, what would be best?
Make a copy of the master and import directly into the copy of the master each day. Replace existing master with the new copy.
Import new data into a temporary database each day but change auto-increment start value of the primary keys to be greater than the maximum in the master. Would I then also change the auto-increment values for the primary keys for all tables (main table and its children)?
Import new data into a temporary database each day, not worrying about the primary key values. Find some other way to merge the temporary database with the master without collisions of the primary keys? If using this strategy, how can I update the primary key in the main table for the new data while making sure all the relationships with the child tables remain correct?
I'm not sure this is as complicated as you are making it?
Why not just do this:
Import raw data into temporary table (why does it have to be a separate database?)
Run your transformations/integrity checks on the temporary table.
When the data is good, insert it directly into the master table.
Use auto incrementing ids on the master table that are not dependent on your data being imported. That allows you to have a unique id and the original ids that might have existed in your import.
Add a field to your master table(s) that gives you a record of which import the records came from.
In addition to copying the data to your master table, make a log that ties back to the data you merged. Helps you back out the data if you find it's wrong/bad and gives you an audit trail.
In the end just set up a sandbox database, write a bunch of stored procedures and test the crap out of it. =)
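The steps above could be sketched roughly as follows (all table and column names are illustrative):

```sql
-- Staging table mirrors the master but keeps the file's original id.
CREATE TABLE staging_orders (
    source_id INT NOT NULL,        -- id as it appeared in the import file
    amount    DECIMAL(10,2)
);

-- Master rows get their own AUTO_INCREMENT id plus an audit trail.
CREATE TABLE master_orders (
    id              INT AUTO_INCREMENT PRIMARY KEY,
    source_id       INT,           -- original id from the import
    amount          DECIMAL(10,2),
    import_batch_id INT            -- which import this row came from
);

-- After the integrity checks pass, merge the validated batch in,
-- letting the master's AUTO_INCREMENT assign fresh, collision-free ids.
INSERT INTO master_orders (source_id, amount, import_batch_id)
SELECT source_id, amount, 42       -- 42 = today's batch id
FROM staging_orders;
```

Because the master generates its own ids, there is nothing to renumber; the `source_id` column preserves the original keys and the `import_batch_id` lets you back out or audit any single day's load.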

import related tables from Access into SQL Server 2008

In SQL Server 2008 I have remade the database structure to be similar to the Access one. I need to import a couple of related tables, but I am worried that the foreign keys won't match the autonumber fields from the related tables.
You have some options here:
If you export the table to SQL Server, all the data will make it through properly and then you can set your PKs and FKs
Create the Table structure with an IDENTITY column and use SET IDENTITY_INSERT to put in the values you want into the Identity column.
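The IDENTITY_INSERT approach looks like this (SQL Server syntax; the `Customers` table and its columns are illustrative):

```sql
-- Temporarily allow explicit values in the identity column.
SET IDENTITY_INSERT dbo.Customers ON;

-- Insert the rows with their original Access autonumber values,
-- so existing foreign keys keep pointing at the right rows.
INSERT INTO dbo.Customers (CustomerId, Name)
VALUES (17, 'Alice'),
       (23, 'Bob');

SET IDENTITY_INSERT dbo.Customers OFF;
```

Note that the column list must be given explicitly while IDENTITY_INSERT is on, and only one table per session can have it enabled at a time.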
Without knowing more details about your table structures and locations, I can only tell you generic things like
You will have to match the keys up manually so that the PK-FK references remain the same.
If you need to match the old Access ids to the new autogenerated ids in an existing table, this is something you needed to do at the time of moving the data from the original table, unless you happened to store the Access ids. Usually I build some type of cross-matching table with the old id and the new id as part of the import process. Then you use this table against the related tables to update their ids. If you didn't do this and the ids are different, you will have to find a way to match the rows back to the original Access table before you can import the related tables. I hope your table has a natural key in that case.
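The cross-matching table described above might look like this (SQL Server syntax; all table, column, and key names are illustrative, and an `Email` column is assumed as the natural key):

```sql
-- Map each old Access autonumber to the new identity value.
CREATE TABLE IdMap (
    OldAccessId INT NOT NULL PRIMARY KEY,
    NewId       INT NOT NULL
);

-- Populate it by joining the imported parent table to the Access
-- original on the natural key.
INSERT INTO IdMap (OldAccessId, NewId)
SELECT a.AccessId, c.CustomerId
FROM AccessCustomers a
JOIN Customers c ON c.Email = a.Email;

-- Then translate the foreign keys as the child rows are imported.
INSERT INTO Orders (CustomerId, OrderDate)
SELECT m.NewId, a.OrderDate
FROM AccessOrders a
JOIN IdMap m ON m.OldAccessId = a.CustomerId;
```

Without a usable natural key, this mapping cannot be reconstructed after the fact, which is why the answer stresses doing it during the original import.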
If the tables are the same, you could use the rather verbosely named "Microsoft SQL Server Migration Assistant 2008 for Access". This will allow you to bring over the data whilst keeping the same keys.