There are two types of insertion occurring in my table:
1. From users
2. Through migrations (using the Yii2 PHP framework)
This table's ids are used as foreign keys in other tables, so these ids must not change.
The problem is that I have two different environments: a local database and the live database (the application is live).
Suppose there is one record in the local database and one on the live server. Locally, I log in as a user and insert one more record, and then insert a third record through a migration.
Now if that migration runs on the live server, it gets primary key 2, because the live table had only one record, while locally the same row got key 3, because one record had been inserted by a user in between.
Kindly help me handle this scenario.
To restate: consider that this table has 8 records in one database (on the live server) and 9 records in another. I want to insert a new record whose id is the same in both databases, where the insertion happens through the same migration applied to both servers. Is there any way to set different ranges for the auto-increment primary key, or any other solution?
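One common workaround for keeping migration-seeded rows identical across environments (a sketch, with a hypothetical table name and id range) is not to rely on auto-increment for those rows at all, and instead insert them with explicit ids from a reserved high range that user inserts will not reach soon:

```sql
-- Hypothetical reference table; ids >= 1000 are reserved for
-- migration-seeded rows, so both databases get the same key.
INSERT INTO my_table (id, name) VALUES (1000, 'seeded-by-migration');
```

This only stays safe as long as user-generated auto-increment ids remain below the reserved range.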
I have quite an interesting problem regarding a table's primary key when inserting new rows into a Postgres DB using Hasura.
In short, this is what happened:
There was an old backend for the app I am currently developing, which used a MySQL database, and my task was to move all of its data to a new Hasura/Postgres DB. I wrote scripts that moved the data, and that went fine. The script inserted all of the users' data including their primary keys, because of the users' relationships with other tables.
The problem occurred when I tried to insert new users with a Hasura mutation:
Uniqueness violation. duplicate key value violates unique constraint
"users_pkey"
What I assume happened is: my PK on users is not a plain integer/auto-increment column;
it is:
id - integer, primary key, unique, default: nextval('users_id_seq'::regclass)
When I try to insert a new row, the sequence assigns it the next value it remembers, and a user with that PK was already imported from the MySQL DB.
Is there some way to edit my table's PK so it auto-increments past the imported ids, or does someone have a creative solution?
Thanks in advance!
You need to adjust the current value of the sequence with this SQL command:
ALTER SEQUENCE users_id_seq RESTART WITH ...
The value you give to this command must be the MAX(id) + 1 of your table.
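Equivalently, you can compute and apply the value in one statement with setval (assuming the table is not empty):

```sql
-- Set the sequence's current value to the highest existing id,
-- so the next nextval('users_id_seq') returns MAX(id) + 1.
SELECT setval('users_id_seq', (SELECT MAX(id) FROM users));
```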
I have 2 databases from a WordPress website.
An issue happened and 50% of my posts disappeared.
I have a copy of database 1 from 03.03.21,
and the existing database 2 of the website from 24.03.21.
So database 1 contains many posts that were later deleted,
and database 2 has some new posts that do not exist in the older database 1.
Is there any software, or a way, to merge these 2 databases?
That is, to compare the databases and add the entries to the newer database that exist only in the older one?
I could do this manually, but one post has entries in many tables, and it is going to be hard to recover the deleted posts that way.
There is no easy solution, but you could try to make a "merge" locally for testing purposes.
Here's how I would do it; I can't guarantee it will work.
1. Load the oldest backup into the server, let's say into a database named merge_target.
2. Load the 2nd backup (the most recent one) into the same server, let's say into a merge_source database.
3. Define a logical order in which to merge each table; this depends on the presence of foreign keys:
If a table A has a foreign key referencing table B, then you will need to merge table B before table A.
This may not work depending on your database structure (and I never worked with WordPress myself).
4. Write and execute queries for each table, with some rules:
SELECT from the merge_source database
INSERT into the merge_target database
If a row already exists in merge_target (i.e. it has the same primary key or unique key), you can use MySQL features depending on what you want to do:
INSERT ... ON DUPLICATE KEY UPDATE if the existing row should be updated
INSERT IGNORE if the row should just be skipped
REPLACE if you really need to delete and re-insert the row
This could look like the following query (here with ON DUPLICATE KEY UPDATE; some_table is a placeholder for whichever table you are merging):
INSERT INTO merge_target.some_table (col_a, col_b, col_c)
SELECT
    s.col_a
    , s.col_b
    , s.col_c
FROM merge_source.some_table AS s
ON DUPLICATE KEY UPDATE
    col_b = s.col_b
Documentation:
INSERT ... SELECT
ON DUPLICATE KEY UPDATE
REPLACE
INSERT IGNORE is in the INSERT documentation page
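For completeness, the skip-existing-rows variant of the same merge could look like this (a sketch; some_table and the column names are placeholders):

```sql
-- Rows whose primary or unique key already exists in merge_target
-- are silently skipped instead of updated.
INSERT IGNORE INTO merge_target.some_table (col_a, col_b, col_c)
SELECT col_a, col_b, col_c
FROM merge_source.some_table;
```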
Not sure it will help, but I wrote a database migration framework in PHP; you can take a look: Fregata.
We have the below requirement:
Currently, we get the data from the source (another server, another team, another DB) into a temp DB (via batch jobs). After the data arrives in our temp DB, we process it, transform it, and update our primary DB with the difference (i.e. the records that changed or were newly added).
Source -> tempDB (recreated daily) -> delta -> primaryDB
Requirement:
- Delete the data in the primary DB once it is deleted in the source.
Ex: suppose a record with ID=1 is created in the source; it comes to the temp DB and eventually makes it to the primary DB. When this record is deleted in the source, it should be deleted in the primary DB as well.
Challenge:
How do we delete from the primary DB when there is nothing to refer to in the temp DB (since the record is already deleted in the source, nothing arrives in the tempDB)?
Naive approach:
- We can clean up the primary DB before every transform and load afresh. However, it takes a significant amount of time to clean up and repopulate the primary DB every time.
You could create triggers on each source table that fill a history table with deleted entries. Sync that over to your tempDB and use it to delete the corresponding rows in your primary DB.
You either want one delete-history table per table, or a combined history table that also stores the name of the table whose trigger recorded the deletion.
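A minimal sketch of such a trigger (SQL Server syntax; the table and column names are placeholders) could look like:

```sql
-- History table recording which rows were deleted, and when.
CREATE TABLE dbo.SourceTable_DeleteHistory (
    Id        INT       NOT NULL,
    DeletedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- After each delete, copy the deleted ids into the history table;
-- the "deleted" pseudo-table holds the rows that were removed.
CREATE TRIGGER trg_SourceTable_Delete
ON dbo.SourceTable
AFTER DELETE
AS
BEGIN
    INSERT INTO dbo.SourceTable_DeleteHistory (Id)
    SELECT Id FROM deleted;
END;
```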
You might also want to look into SQL Compare or other tools for synchronizing tables.
If you have access to tempDB and primeDB at the same time (same server, or linked servers), you could also try a
DELETE
FROM primeDB.Tablename
WHERE NOT EXISTS (
    SELECT 1
    FROM tempDB.Tablename WHERE id = primeDB.Tablename.Id
)
which may perform awfully on large tables - ask your DB designers.
In this scenario, if the tempDB and the primary DB have no direct reference to each other, you can use event notifications at the database level to track changes.
Here is a link I found for the same:
https://www.mssqltips.com/sqlservertip/2121/event-notifications-in-sql-server-for-tracking-changes/
I'm working with MySQL and Symfony2. I have a database with 15 tables. I want to export all data starting from a row in the main table: go through each table I have, select all data related to that row, and move it to another database (which is not empty), without losing the foreign key relations.
I will try to explain it with a very basic example.
screenshot here
So I want to select the user with id 1, from that user select all threads connected to them, and from those threads select all of their posts. I want to export all this data to another database (which is also not empty) and keep the data integrity with all its relations.
Is there a tool for this, or how can I create an export script like this?
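If both databases live on the same MySQL server (or can be brought there temporarily), one way to sketch such a script in plain SQL, with the table and column names from the example as placeholders, is to copy parents before children so the foreign key checks stay satisfied:

```sql
-- Copy one user, then the threads referencing that user, then the
-- posts referencing those threads. The order matters for the FKs.
INSERT INTO target_db.user   SELECT * FROM source_db.user   WHERE id = 1;
INSERT INTO target_db.thread SELECT * FROM source_db.thread WHERE user_id = 1;
INSERT INTO target_db.post   SELECT * FROM source_db.post
WHERE thread_id IN (SELECT id FROM source_db.thread WHERE user_id = 1);
```

If the target database already contains rows with the same primary keys, the ids would have to be remapped first, which a plain copy like this does not do.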
Hi!
I have the following stupid question.
First, an example to explain my point of view.
I have a MySQL DB with InnoDB tables and foreign keys between them,
and I currently work on a localhost machine.
When I delete the last inserted record from a table whose auto-increment primary key has reached, say,
100, the next primary key given by MySQL is 101. But if I restart the MySQL server, then in the same table the primary key for the next record is reset back to 100.
I have to mention that I set a trigger on the table I deleted the record from, which copies the deleted record to an archive table before the delete.
Now, after the MySQL server is restarted, a newly inserted record gets primary key 100, and when I try to delete it, a conflict appears because the archive table already contains a row with primary key 100.
I have to keep the references between deleted keys.
This happens only if the MySQL server is restarted.
The question is whether this problem can be solved, given that after the web application is deployed to a shared hosting server, a server restart will (I suppose) happen at some point.
I want to mention that the database is more complex than one table. I absolutely need to keep the integrity between the archive tables after data is moved.
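For what it's worth, this is the classic InnoDB behavior where, before MySQL 8.0, the auto-increment counter was kept only in memory and recalculated from MAX(id) on restart; MySQL 8.0 persists the counter across restarts. On older versions, one workaround sketch (the table name and value are placeholders) is to bump the counter past the highest archived key after a restart:

```sql
-- Raise the auto-increment counter so new rows cannot reuse
-- keys that already exist in the archive table.
ALTER TABLE my_table AUTO_INCREMENT = 101;  -- MAX(id) in the archive + 1
```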