I have an app that stores some data locally in SQLite,
and I have a server that stores the same data in a MySQL database.
Both tables have a timestamp column that indicates when a row was last edited.
What I want to do is sync the data so it matches,
so if another device changes the central data on the server, it is pushed down to all devices.
Currently I achieve this as follows:
In the app I store the time I last made a server read.
I ask the server for all data that has changed since that time.
I make a read about every 30 seconds.
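Roughly, the client loop looks like the sketch below (illustration only, assuming the requests package; the endpoint, field names, and the apply_to_sqlite helper are made up):

```python
import time
import requests

SYNC_URL = "https://example.com/api/changes"   # hypothetical endpoint

def apply_to_sqlite(row):
    """Placeholder: upsert one changed row into the local SQLite table."""
    pass

last_read = "1970-01-01T00:00:00Z"             # persisted between runs in the real app

while True:
    changed = requests.get(SYNC_URL, params={"since": last_read}).json()
    for row in changed["rows"]:
        apply_to_sqlite(row)
    # The next "since" value comes from the device clock, which is the part
    # the clock-change worry below is about.
    last_read = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    time.sleep(30)                             # read about every 30 seconds
```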
My issue: what happens when the clocks change? (This will potentially cause issues.)
What is the standard way of achieving what I want? The project is very early in development, so I can change the approach if there is a much better way of achieving what I want.
Thanks
Related
I want to export updated data from MySQL/PostgreSQL to MongoDB every time a specified table changes or, if that's impossible, dump the whole table to NoSQL every X seconds/minutes. What can I do to achieve this? I've googled and found only paid, enterprise-level solutions, and those are out of reach for my amateur project.
SymmetricDS provides an open-source database replication option that supports replicating an RDBMS database (MySQL, Postgres) into MongoDB.
Here is the specific documentation to set up the Mongo target node in SymmetricDS.
http://www.symmetricds.org/doc/3.11/html/user-guide.html#_mongodb
There is also a blog post about setting up Mongo in a bit more detail.
https://www.jumpmind.com/blog/mongodb-synchronization
To get online replication into a target database you can use one of the following approaches:
Feed the data stream into both databases at the same time (dual writes)
Use an enterprise solution that reads the transaction log and pushes the data to the target database
Check periodically for rows with change dates > X
Export the table periodically
Write changed records to a dedicated table with a trigger and poll this table to select the changes (see the sketch at the end of this answer)
Push the changed data with triggers into a data-stream service that feeds the target database
Many additional approaches exist
Which solution fits your needs depends on how much time you want to invest and how much lag the data is allowed to have.
If the amount of data grows or the number of transactions increases, some solutions that fit an amateur project no longer fit.
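As a concrete illustration of the trigger-and-poll option above, here is a minimal sketch; SQLite is used only so the example is self-contained, and the table names are made up. In a real setup the poller would push the logged rows into MongoDB (for example with pymongo) and remember the last change id it has processed.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
CREATE TABLE change_log (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         order_id INTEGER, op TEXT);
CREATE TRIGGER orders_ins AFTER INSERT ON orders
BEGIN INSERT INTO change_log (order_id, op) VALUES (NEW.id, 'insert'); END;
CREATE TRIGGER orders_upd AFTER UPDATE ON orders
BEGIN INSERT INTO change_log (order_id, op) VALUES (NEW.id, 'update'); END;
""")

def poll_changes(last_seen_id):
    """Return changes logged after last_seen_id (the poller's high-water mark)."""
    return db.execute(
        "SELECT id, order_id, op FROM change_log WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()

db.execute("INSERT INTO orders (id, total) VALUES (1, 9.99)")
db.commit()
print(poll_changes(0))   # -> [(1, 1, 'insert')]
```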
So I've recently ported over a bunch of very large databases from SQL Server to MySQL using the migration wizard. This was done manually and was very time consuming. My next step is to automate this process in some way. As the SQL Server databases are constantly being updated, I need to track these changes in MySQL as well within a short amount of time. My problem is that the migration wizard takes many days to port all the data over. If there is one small change, it would be beneficial to simply track that change and apply it in MySQL rather than re-porting the entire database every week or so.
There are a few routes I've thought of, but I'm not sure whether they will work or whether they're practical.
Convert the migrations into scripts and run those scripts every week. This would give me a route into automating the process. The problem, however, is that it would take several days to re-transfer all the data, so it's not very practical.
Somehow link MySQL to SQL Server and track changes live on MySQL. I don't know if this is possible, but it seems like it would be the most practical way to do this (a rough sketch of what I have in mind is at the end of this post).
Any suggestions or help on solving this problem would be appreciated.
TL;DR: I need to track small data changes from a SQL Server database into a MySQL database without having to re-transfer all the data via the migration wizard each time.
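To be clearer about the second route, this is roughly the kind of incremental copy I have in mind. It is a sketch under assumptions: the source tables would need a last_modified column (or similar), the pyodbc and mysql-connector-python packages are used, and every name here is illustrative.

```python
import pyodbc
import mysql.connector

src = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=mssql01;"
                     "DATABASE=sales;UID=sync;PWD=secret")
dst = mysql.connector.connect(host="mysql01", user="sync",
                              password="secret", database="sales")

def incremental_copy(last_run):
    """Copy only the rows changed in SQL Server since last_run into MySQL."""
    rows = src.cursor().execute(
        "SELECT id, total, last_modified FROM orders WHERE last_modified > ?",
        last_run).fetchall()
    cur = dst.cursor()
    for r in rows:
        # Upsert so existing rows are updated and new rows are inserted.
        cur.execute(
            "INSERT INTO orders (id, total, last_modified) VALUES (%s, %s, %s) "
            "ON DUPLICATE KEY UPDATE total = VALUES(total), "
            "last_modified = VALUES(last_modified)",
            (r.id, r.total, r.last_modified))
    dst.commit()
    # Remember the newest timestamp seen as the starting point for the next run.
    return max((r.last_modified for r in rows), default=last_run)
```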
I am building an application in VB.Net using MySQL databases on both the clients and server. I will have multiple clients (say 10 to 20) using the system at a time. As clients do work locally, they will need to send their data up to the central server. They will also get back any data that has been changed by any other client since they last checked. So all clients will have all the same data.
I am looking for a system that will speed up the sync process. Basically I need each client to insert any new data they have into the central db, and make updates (on the central db) to any existing data that they have changed locally. They also need to get any new records that exist on the central db that they don't currently have and update their local records with the ones that were changed on the central db by other clients.
I am currently using a bit field to identify the clients. Whenever a client changes something on the central db, the record's bits are updated so that only the changing client's bit is set. This way, when a client syncs with the server, it only gets records that have its bit set to 0. It then makes the changes locally and resets its bit back to 1.
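To make that scheme concrete, here is a rough sketch; SQLite and the table/column names are just for illustration (the real system uses MySQL).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT, sync_bits INTEGER)")

def record_changed_by(client_no, item_id, payload):
    """A client changed a row on the central db: only that client's bit stays
    set, so every other client sees the row as not yet synced."""
    db.execute(
        """INSERT INTO items (id, payload, sync_bits) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET payload = excluded.payload,
               sync_bits = excluded.sync_bits""",
        (item_id, payload, 1 << client_no))
    db.commit()

def sync_client(client_no):
    """Fetch the rows this client has not seen yet, then mark them as seen."""
    mask = 1 << client_no
    rows = db.execute(
        "SELECT id, payload FROM items WHERE (sync_bits & ?) = 0", (mask,)
    ).fetchall()
    db.execute("UPDATE items SET sync_bits = sync_bits | ? WHERE (sync_bits & ?) = 0",
               (mask, mask))
    db.commit()
    return rows

record_changed_by(3, 1, "edited by client 3")
print(sync_client(7))   # client 7 picks up the change; client 3 would get nothing
```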
This works fine and fast, except that it is limiting. I can only have a set number of total clients (64 to be exact), and the more clients I have, the slower the syncing gets because of the increased size of the bit field. I don't expect to ever have more than about 30 clients at a time, but I really don't like the idea that it has a maximum.
I can check dates on records instead of bits when syncing, but this is very slow. However, it has no limit on the number of clients.
Does anyone know of a good way to do this that is both fast and unlimited?
Thanks.
Just read and write directly to the central database? If required, you can work on the data using a disconnected dataset:
http://www.vbmigration.com/BookChapters/ProgrammingVBNET_Chap21.pdf
So I'm going to attempt to create a basic monitoring tool in VB.NET. I'd like some advice on how to tackle the logging and reporting side of things, so I'd appreciate responses from users who I'm sure have a better idea than me and can point out far more efficient ways of doing things.
My plan is to have a client tool which reads values from a MySQL database and updates its display every x interval, I'm thinking 10/15 minutes at the moment. This side of the application is quite easy: I can get something to read a database every x amount of time and then change labels and display alerts based on the values. This is all well documented and I am probably okay with that.
The second part is to have a client that sits in the system tray of the server gathering the required information. The system tray part will probably be the trickiest bit of this, but that's not really part of my question.
So I assume I can use the normal information-gathering commands, store the results (perhaps as strings), and then connect to the same database and write them to the relevant fields. For example, if I had a MySQL table called "server" with a column titled "Connection", I could check whether the server has an internet connection, store the result as 1 for yes or 0 for no, and then send a MySQL command to update the "Connection" value to either 0 or 1.
Then, in the monitoring tool, I assume I can run a MySQL query to check the "Connection" column and, if the value is 0, change a label or flag an error, and if it is 1, report that connectivity is okay?
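Something like the sketch below is what I have in mind for both sides. Python and the mysql-connector-python package are used only to illustrate the idea (the real tool would be VB.NET), and the table, column, and host names are made up.

```python
import socket
from datetime import datetime
import mysql.connector

def internet_up(host="8.8.8.8", port=53, timeout=3):
    """Crude connectivity probe: can we open a TCP socket to a public DNS server?"""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

conn = mysql.connector.connect(host="db.example.com", user="monitor",
                               password="secret", database="monitoring")
cur = conn.cursor()

# Agent side: write the current status plus a timestamp, so the reporter can
# tell a genuine 0 from a row that simply has not been refreshed.
cur.execute(
    "UPDATE server SET `connection` = %s, last_updated = %s WHERE name = %s",
    (1 if internet_up() else 0, datetime.utcnow(), "web01"))
conn.commit()

# Reporter side: read the flag back and treat anything not refreshed within
# two polling intervals as stale (see the last question below).
cur.execute("SELECT name, `connection`, last_updated FROM server")
for name, up, updated in cur.fetchall():
    stale = (datetime.utcnow() - updated).total_seconds() > 2 * 15 * 60
    print(name, "OK" if up and not stale else "ALERT")
```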
My main questions about the above are listed below.
Is using a MySQL database the most efficient way of doing something like this?
Obviously if my database goes down there's no more reporting; I still think that's a con I'll have to live with, though.
Is storing everything as values within the code the best way to store my data?
Is there any particular type or format I should use for the MySQL column? I was thinking maybe tinyint(9).
Is the above method redundant and pointless?
I assume all these database connections could cause some unwanted server load; however, the 15-minute refresh time should combat that.
Is there a way to properly deal with delays, e.g. the client not updating in time for the reporter so that it picks up stale data? Perhaps a fail-safe using a column containing the last-updated time?
You probably don't need the tool that gathers information per se. The web app (real time monitor) can do that, since the clients are storing their information in the same database. The web app can access the database every 15 minutes and display the data, without the intermediate step of saving it again. This will provide the web app with the latest information instead of a potential 29-minute delay.
In other words, the clients are saving the connection information once. Don't duplicate it in the database.
MySQL should work just about as well as anything.
It's a bad idea to hard code "everything". You can use application settings or a MySQL table if you need to store IPs, etc.
In an application like this, the cost of converting values will more than offset the storage savings of a tinyint. I would use the most convenient data type.
This question already has answers here: Client-server synchronization pattern / algorithm?
I am starting to set up the Core Data model for a large-scale app, and was hoping for some feedback on proper synchronization methods/techniques when it comes to the server database and offline capabilities.
I use PHP and MySQL for my web server / database.
I already know how to connect, receive data, store to Core Data, etc. I am looking more for help with the methodology, and in particular with tracking data changes, in order to:
A) Ensure the app and server are in sync during online and offline use (i.e. offline activity will get pushed up once back online).
B) Optimize the speed of saving data to the app.
My main questions are:
What is the best way to check what new/updated data in the app still needs to be synchronized (after offline use)?
(i.e. in all my Core Data entities I put an 'isSynchronized' attribute of BOOL type, then update it to 'YES' once the record is successfully submitted and a response comes back from the server). Is this the best way?
What is the best way to optimize speed of saving data from server to core data?
(i.e. how can I update only the data in Core Data that is older than what is in the server database, without iterating through every entity and updating everything every single time?) Is it possible without adding a server-side column for tracking update timestamps to EVERY table?
Again, I already know how to download data and store it to Core Data, I am just looking for some help with best practices in ensuring synchronization across app and server databases while ensuring optimized processing time.
I store a last-modified timestamp on both the Core Data records on the phone and the MySQL tables on the server.
The phone searches for everything that has changed since the last sync and sends it up to the server along with a timestamp of the last sync, and the server responds with everything that has changed on its end since the provided sync timestamp.
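To illustrate the shape of that exchange, here is a minimal server-side sketch; SQLite stands in for MySQL so the example is self-contained, and the table and column names are illustrative rather than my actual schema.

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id TEXT PRIMARY KEY, payload TEXT, modified_at TEXT)")

def handle_sync(client_changes, last_sync_iso):
    """Apply the client's edits (last writer wins on the modified timestamp),
    then return every row the server has changed since the client's last sync."""
    for rec in client_changes:
        db.execute(
            """INSERT INTO items (id, payload, modified_at) VALUES (?, ?, ?)
               ON CONFLICT(id) DO UPDATE SET payload = excluded.payload,
                   modified_at = excluded.modified_at
               WHERE excluded.modified_at > items.modified_at""",
            (rec["id"], rec["payload"], rec["modified_at"]))
    db.commit()
    changed = db.execute(
        "SELECT id, payload, modified_at FROM items WHERE modified_at > ?",
        (last_sync_iso,)).fetchall()
    # The new sync timestamp is handed back to the phone for its next request.
    return {"changes": changed, "sync_timestamp": datetime.now(timezone.utc).isoformat()}
```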
Performance is an issue when a lot of records have changed. I do the sync on a background NSOperation which has its own managed object context. When the background thread has finished making changes to its managed object context, there is an API for merging all of the changes into the main thread's managed object context, which can be configured to simply throw away all the changes if there are any conflicts caused by the user changing data while the sync is going on. In that case, I just wait a few seconds and then try doing a sync again.
On older hardware, even after many optimisations, it was necessary to abort the sync entirely if the user started doing things in the app; it was simply using too many system resources. I think more modern iOS devices are probably fast enough that you don't need to do that anymore.
(by the way, when I said "a lot of records have changed" I meant 30,000 or so rows being updated or inserted on the phone)