My question:
How can I sync two MySQL databases (an offline local database with a master online database)?
The problem is that the database is relational and the ids are, as always, auto-incremented, so if I just sync with plain inserts it will break my references.
This is for a clinic management app I made. It currently runs on a server, but sometimes the internet connection goes down or slows to a crawl at my users' clinics, so I need to let them work in offline mode (store everything in a local db) and manually sync (bi-directionally) with the remote database at the end of the day.
So basically each clinic should have its own local db, and they should all sync to one central db.
Example of tables:
db.Central.users
|id|user|clinic |
|01|demo|day care|
|02|nurs|er |
|03|demX|day care|
db.day care.users
|id|user|clinic |
|01|demo|day care|
|02|demX|day care|
(Note: ids don't necessarily match between the central and local db, yet the structure of the db is identical on both.)
Example of the database structure:
- each user has many visits and plugins
- each visit has one user as patient_id and one user as doctor_id
- each plugin has one user and many inputs
- each plugin_input belongs to one plugin
I have two databases: one on the server and the other hosted locally, for offline mode.
What I want is to be able to sync the local db with the online one. But since I have more than one user, I have more than one local db, so each will contain nearly the same ids, while the online db should contain all of them combined.
So how can I sync them together?
I use PHP / MySQL (online) / SQLite (local).
There are a couple of things you can do.
Firstly, you could consider a composite primary key for the user table - for instance, you could include "Clinic" in the primary key. That way, you know that even if the auto increment values overlap, you can always uniquely identify the user.
The "visit" table then needs to include patient_clinic and doctor_clinic in the foreign keys.
Alternatively - and far dirtier - you can simply set the auto increment fields for each clinic to start at different numbers: ALTER TABLE user AUTO_INCREMENT = 10000; will set the first auto_increment for the user table to 10000; if the next clinic sets it at 20000 (or whatever), you can avoid overlaps (until you exceed that magic number, which is why it's dirty).
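As a rough sketch, assuming the local databases are also MySQL (the question mentions SQLite locally, where you would seed the sqlite_sequence table instead), the per-server auto_increment_increment / auto_increment_offset variables are a related and slightly less dirty option:

-- Clinic 1's local database
ALTER TABLE users AUTO_INCREMENT = 10000;
-- Clinic 2's local database
ALTER TABLE users AUTO_INCREMENT = 20000;

-- Alternative: give every database its own residue class of ids,
-- the same trick used for multi-master replication
SET GLOBAL auto_increment_increment = 10;  -- leaves room for up to 10 clinics
SET GLOBAL auto_increment_offset = 1;      -- clinic 1 generates 1, 11, 21, ...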
I'd be tempted to redesign the application architecture to have an API at the "remote" end and a job that runs at the end of each day at the "local" end to post all the "local" data to the "remote" API.
This way you could have the "local" job create a JSON message similar to this:
{
  "user_visit": {
    "user_id": 1,
    "doctor_id": 1,
    "plugins": [
      {
        "plugin_name": "name",
        "data": {
          "data_field1": "VALUE",
          "data_field2": "VALUE2",
          "data_fieldxx": "VALUExx"
        }
      }
    ]
  }
}
And send each message to the API. In the API, you would then iterate over the JSON message, reconstructing the relationships and inserting them into the "remote" database accordingly.
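As a rough sketch of what the API could do for each message (the table and column names here are assumptions, not your actual schema), the key point is to capture the ids generated by the central database and use them for the child rows instead of the local ids:

-- Resolve the patient and doctor to their central ids first, e.g. by
-- matching on a natural key such as (user, clinic), then insert the visit:
INSERT INTO visits (patient_id, doctor_id, visit_date)
VALUES (@central_patient_id, @central_doctor_id, NOW());

-- Capture the id the central database just generated for this visit
SET @central_visit_id = LAST_INSERT_ID();

-- Insert each plugin from the message against the new visit id
INSERT INTO plugins (visit_id, plugin_name)
VALUES (@central_visit_id, 'name');
SET @central_plugin_id = LAST_INSERT_ID();

-- ...and each plugin input against the new plugin id
INSERT INTO plugin_inputs (plugin_id, field_name, field_value)
VALUES (@central_plugin_id, 'data_field1', 'VALUE');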
Create the "id" and "id_offline" column in the local database. make all relationships between tables with "id_offline" and use automatic increment.
The "id" will match the MySQL "id" and will start with a null value.
When sending the data to the server, a check will be made if the "id" is equal to null, if it is, "insert" if not "update", after that use the MySQL "RETURNING" function to return the automatically generated id and return this information to the device that made the request.
When the device receives the new ids, update the "id" column of the local database to be the same as MySQL.
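A hedged sketch of that flow for a users table (the same pattern applies to every other table; names are illustrative):

-- Local (SQLite): keep the server id alongside the local auto-increment id
CREATE TABLE users (
    id_offline INTEGER PRIMARY KEY AUTOINCREMENT,  -- used for all local relationships
    id         INTEGER NULL,                       -- the central MySQL id, null until synced
    user       TEXT,
    clinic     TEXT
);

-- On the central MySQL server, for each row the device sends:
--   local id IS NULL  -> INSERT, then send the generated id back
--   local id NOT NULL -> UPDATE the existing row
INSERT INTO users (`user`, clinic) VALUES ('demo', 'day care');
SELECT LAST_INSERT_ID();  -- the device stores this in its local "id" column

UPDATE users SET `user` = 'demo', clinic = 'day care' WHERE id = 42;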
Related
I am using redis, storing tokens as userId:refreshToken.
However, this method prevents one user from logging in on multiple devices.
So I tried changing it to the format userId_accessToken:refreshToken.
However, with this method I have to del -> insert whenever the access token or refresh token changes.
So I'm debating between two methods:
1. Save it in redis as above; a TTL is applied when the key is created.
2. Save it in the DB as [id, userId, refreshToken, accessToken, expDate]; in MySQL, I would create a cron job that deletes rows after the expDate.
Which is the better way?
Our server's memory is 3969424.
The database uses RDS and MySQL.
If there's another good way, that's great too!
I would choose whichever is simpler to implement.
Another thought: you can use the MyRocks engine to automatically delete old keys (MyRocks TTL):
CREATE TABLE t1 (a INT, b INT, c INT, PRIMARY KEY (a), KEY(b)) ENGINE=ROCKSDB COMMENT "ttl_duration=3600;";
In the above example, ttl_duration is set to 3600, meaning that rows older than 3600 seconds are expected to be removed from the database.
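If you stay on plain MySQL/RDS instead, the cron-style cleanup from your option 2 could also live inside the database as a scheduled event; a rough sketch, assuming a hypothetical refresh_tokens table:

-- Requires the event scheduler to be enabled (event_scheduler = ON,
-- set via the parameter group on RDS)
CREATE EVENT purge_expired_tokens
ON SCHEDULE EVERY 1 HOUR
DO
  DELETE FROM refresh_tokens
  WHERE expDate < NOW();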
There are two types of insertion occurring in my table:
1. From the user
2. Through a migration (using the Yii2 PHP framework)
This table's ids are used as foreign keys in other tables, so I want these ids never to change.
The problem is that I have two different environments: a local database and a live database (the application is live).
Say there is one record in the local database and one on the live server. Locally, I log in with my account as a user and insert one more record, and then insert a third record through a migration.
Now, if this migration runs on the live server, that record gets primary key 2 (because the live table had only one record), while locally its key is 3 (because one record was inserted by a user).
Kindly help me figure out how to handle this scenario.
Now consider that this table has 8 records in one database (on the live server) and 9 records in another server's database. I want to insert a new record whose id is the same in both databases, where the insertion is done through the same migration applied to both databases. Is there any way to set different ranges for the auto-increment primary key? Any other solution is appreciated.
We have the below requirement:
Currently, we get the data from the source (another server, another team, another DB) into a temp DB via batch jobs. Once the data is in our temp DB, we process it, transform it, and update our primary DB with the difference (i.e. the records that changed or were newly added).
Source -> tempDB (recreated daily) -> delta -> primaryDB
Requirement:
- Delete the data in the primary DB once it is deleted in the source.
Example: suppose a record with ID=1 is created in the source; it comes to the temp DB and eventually makes it to the primary DB. When this record is deleted in the source, it should get deleted in the primary DB as well.
Challenge:
How do we delete from the primary DB when there is nothing to refer to in the temp DB (since the record is already deleted in the source, nothing arrives in the tempDB)?
Naive approach:
- We could clean up the primary DB before every transform and load it afresh. However, it takes a significant amount of time to clean up and repopulate the primary DB every time.
You could create triggers on each source table that fill a history table with deleted entries. Sync that over to your tempDB and use it to delete the corresponding rows in your primary DB.
You either want one delete-history table per table, or a combined history table that also includes the name of the table that triggered the deletion.
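A rough sketch of such a trigger, assuming SQL Server and the combined-history variant (all names here are illustrative):

CREATE TABLE dbo.DeleteHistory (
    TableName SYSNAME   NOT NULL,
    DeletedId INT       NOT NULL,
    DeletedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER trg_SomeTable_Delete
ON dbo.SomeTable
AFTER DELETE
AS
BEGIN
    -- record every deleted id so the primary DB can be cleaned up later
    INSERT INTO dbo.DeleteHistory (TableName, DeletedId)
    SELECT 'SomeTable', d.Id
    FROM deleted AS d;
END;
GO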
You might also want to look into SQL Compare or other tools for syncing tables.
If you have access to the tempDB and primeDB at the same time (same server or linked servers), you could also try a
DELETE p
FROM primeDB.dbo.Tablename AS p
WHERE NOT EXISTS (
    SELECT 1
    FROM tempDB.dbo.Tablename AS t
    WHERE t.Id = p.Id
)
which will perform awfully - ask your db designers.
In this scenario, if the tempDB and primary DB have no direct reference, you can use event notifications to track changes at the database level.
Here is a link I found on the same topic:
https://www.mssqltips.com/sqlservertip/2121/event-notifications-in-sql-server-for-tracking-changes/
I have finally reached the data migration part of my project and am now trying to move data from MySQL to SQL Server.
SQL Server has a new schema (the mapping is not always one to one).
I am trying to use SSIS for the conversion, which I started learning this morning.
We have customer and customer location tables in MySQL and equivalent tables in SQL Server. In SQL Server, all my tables now have a surrogate key column (a GUID), which I am creating in a Script Component.
Also note that I do have a primary key in the current MySQL tables.
What I am looking for is how to add child records to the customer location table with the newly created GUID as the parent key.
I see that SSIS has a Foreach Loop container; is it of any use here?
If not, another possibility I can think of is to create two Data Flow Tasks and, [somehow] just before the master data is sent to the destination component [table] in the primary Data Flow Task, add a variable with the newly created GUID and another with the old primary ID, which would then be used to build the source for the Data Flow Task for the child records.
Maybe, to simplify, this could also be done once the Data Flow Task for the master is complete: the Data Flow Task for the child would then read this master data and insert the child records from MySQL into the SQL Server table. This would, though, mean that I have to load all my parent table records back into memory.
I know this is all very confusing, and it is mainly because I am very confused :-(, so bear with me, and if you want more information let me know.
I have been through many links that I found through a Google search, but none of them really explains (or I was not able to understand) how the process is carried out.
Please advise.
Regards,
Mar
**Edit 1**
After further searching and refining my keywords, I found this link on SO and am going through it to see if it can be used in my scenario:
How to load parent child data found in EDI 823 lockbox file using SSIS?
OK, here is what I would do. Put the MySQL data into staging tables in SQL Server that have identity columns set up and an extra column for the eventual GUID, which will start out as null. Now your records have a primary key.
Next comes the sneaky trick. Pick a required field (we use last_name) and, instead of the real data, insert the value from the id field of the staging table. Now you have a record that has both the GUID and the id in it. Update the GUID field in the staging table by joining to it on the ID and the required field you picked. Then update the last_name field with the real data.
To avoid the sneaky trick, and if this is only a one-time upload, add a column to your tables that contains the staging table id. Again, you can use this to look up the GUID when inserting into related tables. Then, when you are done, drop the extra column.
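A hedged illustration of that simpler variant, with made-up table and column names:

-- Parent load: generate the GUID and remember the staging id it came from
INSERT INTO dbo.Customer (CustomerGuid, StagingId, Name)
SELECT NEWID(), s.Id, s.Name
FROM dbo.Staging_Customer AS s;

-- Child load: look the parent GUID up via the remembered staging id
INSERT INTO dbo.CustomerLocation (CustomerGuid, Address)
SELECT c.CustomerGuid, sl.Address
FROM dbo.Staging_CustomerLocation AS sl
JOIN dbo.Customer AS c ON c.StagingId = sl.CustomerId;

-- Once everything is loaded, drop the helper column
ALTER TABLE dbo.Customer DROP COLUMN StagingId;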
Are you aware that there are performance issues involved with using GUIDs? Make sure not to make them the clustered index (which, as the PK, they will be by default unless you specify differently) and use newsequentialid() to populate them. Why are you using GUIDs in the first place? If an identity would work, it is usually better to use it.
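For example (a sketch, not your actual schema), this keeps the GUID out of the clustered index while still exposing it as the key:

CREATE TABLE dbo.Customer (
    CustomerId   INT IDENTITY(1,1) NOT NULL,  -- narrow, ever-increasing clustered key
    CustomerGuid UNIQUEIDENTIFIER  NOT NULL
                 CONSTRAINT DF_Customer_Guid DEFAULT NEWSEQUENTIALID(),
    Name         VARCHAR(100)      NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY NONCLUSTERED (CustomerGuid)
);

-- Cluster on the identity instead of the GUID to avoid fragmentation
CREATE UNIQUE CLUSTERED INDEX CIX_Customer_CustomerId
    ON dbo.Customer (CustomerId);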
Is there a way to keep a timestamped record of every change to every column of every row in a MySQL table? This way I would never lose any data and keep a history of the transitions. Row deletion could be just setting a "deleted" column to true, but would be recoverable.
I was looking at HyperTable, an open source implementation of Google's BigTable, and this feature really whetted my appetite. It would be great to have it in MySQL, because my apps don't handle the huge amounts of data that would justify deploying HyperTable. More details about how this works can be seen here.
Is there any configuration, plugin, fork, or whatever that would add just this one feature to MySQL?
I've implemented this in the past in a PHP model, similar to what chaos described.
If you're using MySQL 5, you could also accomplish this with triggers that fire on your table's update and delete events.
http://dev.mysql.com/doc/refman/5.0/en/stored-routines.html
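For example, a hedged sketch of an audit trigger that copies the previous row state into a history table (the customer table and its columns are made up for illustration):

CREATE TABLE customer_history (
    history_id  INT AUTO_INCREMENT PRIMARY KEY,
    customer_id INT NOT NULL,
    name        VARCHAR(100),
    email       VARCHAR(100),
    changed_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER customer_before_update
BEFORE UPDATE ON customer
FOR EACH ROW
BEGIN
    -- keep the old values before they are overwritten
    INSERT INTO customer_history (customer_id, name, email)
    VALUES (OLD.id, OLD.name, OLD.email);
END//
DELIMITER ;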
I do this in a custom framework. Each table definition also generates a Log table, related many-to-one with the main table, and whenever the framework updates a row in the main table, it inserts the current state of the row into the Log table. So I have a full audit trail of the state of the table. (I have time records because all my tables have LoggedAt columns.)
There's no plugin, I'm afraid; it's more a way of doing things that needs to be baked into your whole database interaction methodology.
Create a table that stores the following info...
CREATE TABLE MyData (
ID INT IDENTITY,
DataID INT )
CREATE TABLE Data (
ID INT IDENTITY,
MyID INT,
Name VARCHAR(50),
Timestamp DATETIME DEFAULT CURRENT_TIMESTAMP)
Now create a sproc that does this...
INSERT Data (MyID, Name)
VALUES (@MyID, @Name)

UPDATE MyData SET DataID = @@IDENTITY
WHERE ID = @MyID
In general, the MyData table is just a key table. You then point it at the record in the Data table that is the most current. Whenever you need to change data, you simply call the sproc, which inserts the new data into the Data table and then updates MyData to point to the most recent record. All of the other tables in the system would key off MyData.ID for foreign key purposes.
This arrangement sidesteps the need for a second log table (and keeping the two in sync when the schema changes), but at the cost of an extra join and some overhead when creating new records.
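Reading the current state is then just a join from the key table to the row it points at, e.g. inside a procedure that takes @MyID:

SELECT d.*
FROM MyData AS m
JOIN Data AS d ON d.ID = m.DataID
WHERE m.ID = @MyID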
Do you need it to remain queryable, or will this just be for recovering from bad edits? If the latter, you could just set up a cron job to back up the actual files where MySQL stores the data and send them to a version control server.