Best practice: MySQL remote mobile devices sync over 3G connection

Currently we have one master MySQL server that connects every hour to 100 remote mobile devices [vehicles] over a 3G connection [not very reliable: a few cars get disconnected daily while a sync is in progress]. The sync is done through a .NET Windows service tool. After checking the remote MySQL status, the master starts performing the sync. Sometimes the sync payload is about 6-8 MB. The sync is performed for one table only, using a non-transactional approach.
The MySQL server version in use is 4.1.22.
Questions:
Is it useful to make the sync transactional, knowing that only one table is being synced? Or is there no value added?
The sync data is loaded onto the remote machine using the MySQL statement:
LOAD DATA LOCAL INFILE
The file format is CSV. How can I send the data in compressed format, without developing a tool that has to reside on the remote device?
Is it good practice or architecture in the sync domain to deploy a remote application that performs the sync after the data has been sent, or should it be done directly by the master? I mean, a tool that resides on the remote machine will be difficult to update or fix when new requirements appear, but it will save a lot of bandwidth during the sync operation and eliminate the errors that can arise when the connection drops while a live master sync is in progress. So if this is the recommended approach, only compressed data would be sent, and by using some sort of checksum I would verify that all the data arrived; otherwise the request would be initiated again.
Please share your thoughts and experience.
Thanks.

Firstly, I would change the approach to a client-initiated sync rather than a server-initiated sync. A many-to-one approach will scale much more easily than your current one-to-many setup. My comments above give a few good examples of the client-to-server syncing required.
Secondly, turn on transactional record entry. There is no reason not to have it. It will guarantee that the information gets entered in a timely fashion, and it can even give you more 'meta-data' (such as which clients are slow to update, etc.).
Lastly, you can enhance the uploading by taking a different look at it. If you were to implement a service on the server side that accepts a POST request from the client, you'd be able to send the data to the server with no issues. It would be just like uploading a file to a server. Once your 6-8 MB file is uploaded, it is then put into the database. The great thing about this is that if your server is Apache (or, in your case, IIS), you could have every single client uploading data at the same time without much of an issue. At that point, inserting into the MySQL server would take virtually no time, and your process would continue on without a problem.
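As a rough sketch of that client-initiated upload (in Python for brevity, since the original tool is a .NET service; the endpoint URL, checksum header name, and file path are all illustrative): the client gzips the CSV and sends a checksum of the raw data, so the server can detect a truncated 3G transfer and ask for a retry, as described in the question.

    import gzip
    import hashlib
    import urllib.request

    def upload_sync_file(path, url):
        with open(path, "rb") as f:
            raw = f.read()

        compressed = gzip.compress(raw)          # 6-8 MB of CSV shrinks a lot
        checksum = hashlib.md5(raw).hexdigest()  # lets the server verify the payload

        req = urllib.request.Request(
            url,
            data=compressed,
            headers={
                "Content-Type": "application/octet-stream",
                "Content-Encoding": "gzip",
                "X-Checksum-MD5": checksum,      # hypothetical header name
            },
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            if resp.status != 200:
                raise RuntimeError("upload rejected; retry later")

    upload_sync_file("sync_data.csv", "http://master.example.com/upload")

On the receiving side, the server would decompress, recompute the checksum, and only then load the file into the table inside a transaction, so a dropped connection never leaves a half-applied sync.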
This is the way I'd handle your situation...

Related

PHP, MySQL - Storing data locally vs Fetching from remote every time

I have a dashboard application that has a PHP backend and a JavaScript frontend. Data is read from multiple sources, and I have access to the databases of all the sources.
While designing the application, is it a good idea to store remote data locally instead of hitting the remote database every time the application gets a request?
Store locally? The reasoning being that the data is not live. I can write a cron job to run in the background and update the data every 5 minutes, and the application will always read the data from the local DB, giving faster load times.
Read from remote every time? Since I have direct database access to all these remote DBs, I do not notice any performance gain from storing data locally over fetching from remote every time.
Which approach scales better?
What you're describing is called "caching." It's a common optimization.
Fetching data remotely is much more expensive than getting it out of a local cache.
You should learn the Latency Numbers Every Programmer Should Know.
The tricky part of caching is knowing when you need to discard the local cached copy of data and re-fetch it from the remote database. This is a hard problem with no single answer.
There's an old joke attributed to Phil Karlton:
“There are only two hard things in Computer Science: cache invalidation and naming things.”
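A minimal sketch of the time-based invalidation discussed above, in Python for brevity (the question's backend is PHP); the TTL mirrors the 5-minute cron window the asker mentions, and fetch_from_remote_db() is a stand-in for the real remote query:

    import time

    CACHE_TTL = 300   # seconds; matches the 5-minute refresh window
    _cache = {}       # key -> (fetched_at, value)

    def fetch_from_remote_db(key):
        # Placeholder for the expensive remote round trip.
        return {"key": key, "loaded_at": time.time()}

    def get_data(key):
        now = time.time()
        if key in _cache:
            fetched_at, value = _cache[key]
            if now - fetched_at < CACHE_TTL:
                return value                  # still fresh: serve the local copy
        value = fetch_from_remote_db(key)     # miss or stale: pay the remote cost
        _cache[key] = (now, value)
        return value

Time-based expiry like this sidesteps the invalidation problem rather than solving it: you accept up to five minutes of staleness in exchange for never having to know exactly when the source data changed.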

Sync database between WP8 app and server

I am creating a WP8 app.
I have created a SQLite database in isolated storage.
Now my data keeps updating, and I want to regularly download the latest data from the server database and update the local database.
The database on the WP8 device cannot be changed on the client side, so the data merge is one-way only.
Which is the best way and service to use?
If you do not work with a large database, you might prefer to replace the device database rather than worry about merging. This can be as simple as exporting the server database, transferring it to the device, and importing it into the device database. The appropriate method of dumping the database on the server side depends on the type of database (e.g. mysqldump in the case of MySQL).
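The export step can be as small as shelling out to mysqldump (sketched in Python for brevity; credentials and database name are illustrative):

    import subprocess

    # Dump the whole server database to a file that can then be
    # transferred to the device and imported there.
    with open("dump.sql", "wb") as out:
        subprocess.run(
            ["mysqldump", "--user=app", "--password=secret", "appdb"],
            stdout=out,
            check=True,
        )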
If you do work with a large database, or if you are struggling with bandwidth on the device, you might want to use a technique that detects differences. One of the easiest methods is change tracking on the database: every modification is logged with a changed_at timestamp. The device can then remember the last modification it contains, fetch the newer entries, and replicate the changes locally. (For an in-depth explanation, please provide more information about the server environment and data structure.)
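A sketch of that change-tracking pull, in Python for brevity (a real WP8 client would be C#); it assumes a changed_at column and an items table on the server, both illustrative names, and takes server_conn as a DB-API connection to the MySQL server:

    import sqlite3

    def pull_changes(server_conn, local_db_path):
        local = sqlite3.connect(local_db_path)

        # The newest change this device has already seen.
        (last_seen,) = local.execute(
            "SELECT COALESCE(MAX(changed_at), '1970-01-01 00:00:00') FROM items"
        ).fetchone()

        # Only rows modified since then travel over the wire.
        cur = server_conn.cursor()
        cur.execute(
            "SELECT id, payload, changed_at FROM items WHERE changed_at > %s",
            (last_seen,),
        )

        # One-way merge: the server copy always wins, per the question.
        for row_id, payload, changed_at in cur.fetchall():
            local.execute(
                "INSERT OR REPLACE INTO items (id, payload, changed_at) "
                "VALUES (?, ?, ?)",
                (row_id, payload, str(changed_at)),
            )
        local.commit()
        local.close()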

VB6 Database operations slow when multiple users connect to the database

I am currently using VB6 to connect to an MS Access DB using DAO, and I'm experiencing a very noticeable speed reduction when a second user connects to the database.
Here are the steps to reproduce:
Open the database from Computer A by logging into the software
Add records to the database via the software (takes about 0.4 seconds)
A second user logs into the software from Computer B, i.e. this opens the database and displays today's transactions, but the user does nothing else
On Computer A, repeat the operation of adding records; now the operation takes approximately 6 seconds
Further info…
the operation continues to take approximately 6 seconds, even after Computer B logs out of the software
if you close and reopen the application on Computer A, the operation returns to taking only 0.4 seconds to execute!
Any help would be greatly appreciated!
Thanks!
That is the way MS Access works. While it kind of supports multiple users, and kind of supports placing the DB on a file share so multiple PCs can access it, it does neither really well. And if you are doing both (multi-user and over a network to a file share), then I feel your pain.
The answer is to run the upgrade wizard and convert this to an MS SQL Server instance. MS SQL Server Express Edition is a good choice to replace Access in this case. Note that you can still keep all of the code, reports, etc. you have in Access; only the data needs to be moved.
Just to be clear on the differences: in MS Access, when you read data from the database, all of the data required to perform your query is read from a file by your program; no server-side processing is done. If that data resides on a network, you are pulling it across the network. If there are multiple users, you have the additional overhead of locking. Each user's program/process effectively dialogs with the other users' programs/processes via file I/O (writing lock info into the networked file or files). And if the network I/O times out or has other issues, those files can become corrupted.
In SQL Server, it is the SQL Server engine that manages the data requests and returns only the data required. It also manages the locks, and it can detect when a client has disconnected or timed out and clean up accordingly, which reduces the issues caused by multiple users on a network.
We had this problem with our VB3 / Jet DB 2.5 application when we transitioned to using newer file servers.
The problem is "opportunistic locking" : http://support.microsoft.com/kb/296264?wa=wsignin1.0
Albert is probably describing the same thing; the server will permit one client exclusive access to a file, but when another chimes in, this exclusive access will 'thrash' between them, causing delays as the client with the oplock flushes all its local cache to the server before the other client can access the file.
This may also be why you're getting good performance with one client - if it takes an oplock, it can cache all the data locally.
This can also cause some nasty corruption if one of your clients has a power failure or drops off the network, because this flushing of the local cache to the server can be interrupted.
You used to be able to disable this (on the client - so you need to service ALL the clients) on Windows 2000 and XP as per the article, but after Vista SP2 it seems to be impossible.
The comments about not using Access / JetDB as a multi-user database are essentially correct - it's not a good architectural choice, especially in light of the above. DAO is also an obsolete library, even in the obsolete VB6. ADODB is a better choice for VB6, and should allow you some measure of database independence depending on how your app is written.
Since, as you pointed out, you get decent performance with one user on the system, your application is obviously not pulling too much data over the network, and we can't blame network speed here.
In fact, what is occurring is that the Windows file share system is switching from single-user file share mode into multi-user file share mode. This switching of file modes causes a significant delay, and it also means that the second and subsequent users have to figure out and set up locks on the file.
To remove this noticeable delay, simply open what we call a persistent connection at the start of your application. A persistent connection forces the network connection to remain open at all times, so the significant delay of switching between the two file share modes is eliminated. You will then find that performance with two users is the same as with one (assuming the second user is idle and not adding network load). So at application startup, open a back-end table into a global variable and KEEP that table open at all times.
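The same trick sketched in Python with pyodbc, since the idea is language-agnostic (in VB6/DAO you would OpenDatabase/OpenRecordset into a global variable at startup instead); the UNC path and table name are illustrative, and the Access ODBC driver must be installed:

    import pyodbc

    CONN_STR = (
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=\\server\share\backend.mdb;"
    )

    # Opened once at startup and deliberately never closed: the costly
    # file-share mode switch then happens once, not on every operation.
    persistent_conn = pyodbc.connect(CONN_STR)
    persistent_cursor = persistent_conn.cursor()
    persistent_cursor.execute("SELECT * FROM tblDummy")  # touch a back-end table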

Database synchronization Server to Local

I have a local app that uses SQLite. Whenever it has internet access it requests the whole database from the server and recreates the local one from that. Local and Server databases have the same structure, basically the point of the local one is to guarantee function even when no internet is available.
This is a very inefficient way of doing this.
My question is: how do I ask for only the data that is missing?
Should I send the last ID from each local table and have the server send data from that ID onward?
What happens if an existing ID was modified? That would mean all the data has to be checked, but sending the whole database for checking and getting back the modifications or additions also seems stupid.
The configuration is Local SQLite, Server MySQL. I could probably change the server to SQLite if it's recommended.
EDIT:
Multiple clients make requests to the same server MySQL database; PHP processes the requests and replies.
How would you tackle this?
Thank you.
I'd either timestamp the rows in the database and fetch by date, or use rsync (or librsync or similar) to synchronize the database files.
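For the timestamp route, the sketch after the WP8 question above applies here as well. For the file-level route: if both ends are SQLite (the asker offered to switch the server side), rsync can pull the whole database file while transferring only the changed blocks. Host and paths below are illustrative, and the copy should be taken from a quiesced snapshot rather than from a file that is mid-write:

    import subprocess

    # -a: archive mode, -z: compress in transit; rsync's delta algorithm
    # sends only the parts of the file that changed since the last pull.
    subprocess.run(
        ["rsync", "-az",
         "user@server.example.com:/var/data/app.db",
         "/local/data/app.db"],
        check=True,
    )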

MySQL remote sync

We currently have an application located on a remote server, and our call center uses this application to perform customer transactions.
We plan to set up Asterisk on a local server to help us with all the call routing and recording. For Asterisk to work smoothly, we have to move our application from the remote server to the local one.
It will be easy to move all the data to the local server and do transactions locally, but there is also an option for users to do transactions online, which hits the remote server's database.
The reason we still have the remote application is the reliable infrastructure and backup solution provided by Rackspace.
If we move the application to the local server, I am looking for a reliable solution for syncing the remote and local databases so that we can handle local as well as online transactions.
Why not use MySQL master-master replication and hold definitive data at both ends? (Note: you'll have to do some reading on auto_increment_increment and auto_increment_offset.)
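The usual pattern for those two settings is to stagger the auto-increment sequences so the two masters can never hand out the same key. A sketch of the my.cnf fragments (server labels are illustrative):

    # Server A: hands out odd ids
    [mysqld]
    auto_increment_increment = 2
    auto_increment_offset    = 1

    # Server B: hands out even ids
    [mysqld]
    auto_increment_increment = 2
    auto_increment_offset    = 2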
symcbean's answer is basically correct. I'd add this article as a good starting place to understand master-master replication. I'd further recommend High Performance MySQL as a good reference for a deeper understanding of the techniques and issues.
There are some issues that you will have to face doing writes to two non-colocated MySQL servers. You'll have replication lag to deal with, so the databases won't necessarily be completely in sync, but will only be "eventually consistent". Also, if both sides are updating content, you can end up with data integrity issues. If your system leans towards INSERTs more than UPDATEs for the write operations, it is less likely that you'll run into issues. Also, if the subset of data that is likely to be modified tends to be localized around one or the other of the servers, you'll run into fewer issues.
Otherwise, you'll probably want to roll your own solution that is designed towards the specific use cases of your application.