sync database between wp8 app and server - windows-phone-8

I am creating a WP8 App.
I have created a SQLite database in isolated storage.
Now my data keeps updating, and I want to regularly download the latest data from the server database and update the local database.
The database on the WP8 device cannot be changed on the client side, so the merge is one-way only.
What is the best way and service to use?

If you do not work with a large database, you might prefer to replace the device database and not worry about merging. This can be as simple as making an export of the server database, transferring it to the device and then importing it into the device database. The appropriate method of dumping the database on the server side is dependent on the type of database (e.g. mysqldump in the case of MySQL).
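For example, a minimal Python sketch of the replace-everything approach, assuming the server exposes a pre-built database file at a known URL (the URL and file names are illustrative; on WP8 itself the equivalent would be C# writing to isolated storage):
import os
import requests  # assumption: the requests HTTP library

# Download a fresh export of the server database and swap it in for the
# old local copy; no merging logic is needed.
resp = requests.get("https://example.com/export/latest.db", timeout=60)
resp.raise_for_status()
with open("local.db.new", "wb") as f:
    f.write(resp.content)
os.replace("local.db.new", "local.db")  # swap the old database for the new one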
If you do work with a large database, or if you are struggling with bandwidth on the device, you might want to use a technique that detects differences. One of the easiest methods is change tracking on the database: every modification is logged with a changed_at timestamp. The device can then remember the latest modification it already has, fetch only the newer entries, and replicate those changes locally (for an in-depth explanation, please provide more information about the server environment and data structure).
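As a rough illustration of the change-tracking idea, here is a Python sketch; the endpoint, the items table, and the changed_at column are assumptions for illustration, not something from the question:
import sqlite3
import requests  # assumption: the server exposes changed rows as JSON

db = sqlite3.connect("local.db")
# The device remembers the newest changed_at value it has already applied.
(last_sync,) = db.execute("SELECT MAX(changed_at) FROM items").fetchone()

rows = requests.get("https://example.com/api/items",
                    params={"changed_since": last_sync or 0}).json()
for row in rows:
    # One-way merge: the server always wins, so a plain upsert is enough.
    db.execute("INSERT OR REPLACE INTO items (id, value, changed_at) "
               "VALUES (?, ?, ?)", (row["id"], row["value"], row["changed_at"]))
db.commit()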

Related

Is it possible to fetch data from my live website into a simple MySQL database on my own system?

Suppose a user of my website, which is live on the internet, uploads a PDF. Is there a way for those files, once uploaded, to be stored directly in the MySQL database on my own system (laptop)?
To refine the question: does it matter whether one uses a MySQL database on a local system (localhost) or on a live website to store data? Will the database fail to store data if the website is hosted online?
If any part of the question is unclear, please say so.
Thank you.
There are a lot of nuances to your question, and I'll try to address as many of them as I can.
I would not store files directly in the database. You certainly can, but in general you're going to get better performance and other ancillary benefits from storing files as a file in the file system. Store metadata in the database, including at the very least the file name and path on disk (perhaps you want to store more, like the uploader's account information, the size, a long-form text description, and so on, but at least store the path and filename). Then, in your application, fetch the filename from the database and serve the file instead of a database BLOB. One reason is that MySQL performance can really suffer if you don't do this properly.
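A minimal sketch of that split, assuming a MySQL files table and an uploads directory (all the names here are illustrative, not from the question):
import os
import mysql.connector  # assumption: the mysql-connector-python package

def save_upload(db, filename, data, uploader):
    path = os.path.join("/var/www/uploads", filename)
    with open(path, "wb") as f:        # the PDF itself lives on the file system
        f.write(data)
    cur = db.cursor()
    cur.execute("INSERT INTO files (name, path, size, uploader) "
                "VALUES (%s, %s, %s, %s)",
                (filename, path, len(data), uploader))  # only metadata in MySQL
    db.commit()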
Let's say you decide to defy my suggestion and store the file as BLOB data in your database: how can you replicate that to your laptop? Your laptop isn't going to be powered on and connected to the internet all the time; even if you had a server at home running 24 hours a day, your hosting provider should still have better uptime than your home does. What should happen to an upload if you were hosting the database on your laptop, but your laptop was off (or rebooting for system updates)? So you should host the database at the hosting provider and somehow sync it to your local machine. MySQL provides several methods for this: replication, export and import of .sql files, or shipping the binary logs. Each has tradeoffs that you'll want to consider depending on your needs.
But remember how I said you can get other ancillary benefits from storing the file on the file system directly? One of those is that you can rely on file transfer techniques to get the file to your local machine. SFTP, SCP, SyncThing, WebDAV, and any other way you can imagine transferring files can be used to get the remote file to your local system. You wouldn't automatically get the database metadata, but that didn't seem like much of a requirement from your question, so you'd have easy access to the file as uploaded, as quickly as you want.
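For instance, a small Python sketch that pulls an uploaded file down over SFTP (the host, paths, and key file are placeholders):
import paramiko  # assumption: the paramiko SSH/SFTP library

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("example.com", username="me", key_filename="/home/me/.ssh/id_rsa")
sftp = ssh.open_sftp()
# Copy the remote upload to the laptop; schedule this with cron or similar.
sftp.get("/var/www/uploads/report.pdf", "/home/me/uploads/report.pdf")
sftp.close()
ssh.close()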
So there are plenty of ways to accomplish this, and without more details on your question it's tough to recommend a solution, but you have plenty of options available.

MySQL update remote server from local server

I am after some advice please. I am no developer and outsource my work requirements to various freelancers. I have a specific requirement but due to my lack of skills I’m not quite sure what to ask for, hence my question here.
I have a system with several Raspberry Pi "drones" that collect data. These drones are all connected to the web and at present instantly send the data via a live feed directly to a MySQL server hosted at Amazon. This server is accessible via a static IP address.
Each drone is given a unique ID and the data collected is tagged with that ID so we know where it comes from.
The existing MySQL server collects and processes all this data and we have a website that displays the stats. Nothing really complicated and the current system works very well.
The issue I have is that we occasionally have internet connection problems from the drones, so I want to make the whole system more robust. When a drone has a connection issue we lose data, as the drones do not store anything locally, which is what I want to resolve.
Just as a heads up… due to the data structure, the drones will not write to a file; they have to feed directly to a MySQL server.
To resolve this issue, my plan is to have a MySQL server run on each RPi with the same table structure etc. as the main server. Each RPi will write to its own local MySQL server, and I then need that server to "update" the main server at Amazon. Please note the data will only ever be sent in this direction; it will never come from Amazon back to the drones. When a drone can communicate with the main server, I would like the drone-based MySQL server to push data pretty much instantly (or as close as I can get), but where there is an internet connection issue I need the drone to store its own data until the connection is restored, at which point it will update the main server.
As I have said, I am no developer, so I wouldn't be undertaking this work myself, but I would like to know what I need to ask for in order to get the right system.
If anyone can help I would appreciate some pointers. In addition, if this is the type of work you could undertake, please feel free to let me know and maybe we could talk further via PM; after all… someone needs to do it.
Many Thanks.
I recommend using a scheduled update to the Amazon database, written in whatever programming language you are already using. In outline it looks like the following (a minimal Python sketch; the connection details, table, and column names are illustrative, not taken from your setup):
import mysql.connector  # assumption: the mysql-connector-python package

local = mysql.connector.connect(host="localhost", database="drone")
remote = mysql.connector.connect(host="amazon.example.com", database="central")

while True:  # while gathering data
    # ... store the newest reading into the local MySQL here ...
    cur = local.cursor()
    cur.execute("SELECT id, drone_id, payload FROM readings ORDER BY id")
    for rec_id, drone_id, payload in cur.fetchall():
        try:
            rcur = remote.cursor()
            rcur.execute("INSERT INTO readings VALUES (%s, %s, %s)",
                         (rec_id, drone_id, payload))
            remote.commit()
            # optional: read the remote record back to check it was stored
            dcur = local.cursor()
            dcur.execute("DELETE FROM readings WHERE id = %s", (rec_id,))
            local.commit()
        except mysql.connector.Error:
            break  # no internet: keep the rows locally and retry next pass

What is the best way to synchronize a database over the internet? MySQL

I want to build a desktop application, somewhat like a POS. The user can enter data into it and save the data to a local database instead of accessing a remote database on the server. The reason I want to do this is to reduce traffic and make my application more responsive, since it avoids the overhead of accessing a remote database.
I want to build at least 5 clients of this desktop application, and each of them has a local database. Along with these clients, I will set up a server database which I will use for reports or for online access that displays the status and data of all my clients.
For example, when a specific user uses a client machine, all of his data is stored in the local database before it is transported to the server database during synchronization. It may seem like this system doesn't give real-time data updates to the server, but that is what I need. Since the server database is only used for reporting purposes, no information on the server is manipulated by a client.
You may be looking for the JumpMind product called SymmetricDS. It syncs data from branch offices to a central office, shares data with all offices, and syncs subsets of data with specific offices.

best practice: mysql remote mobile devices sync over 3G connection

Currently we have one master MySQL server that connects every hour to 100 remote mobile devices [vehicles] over a 3G connection [not very reliable: it gets disconnected daily while a sync is in progress for a few cars]. The sync is done through a .NET Windows service tool. After checking the remote MySQL status, the master starts performing the sync. Sometimes the sync payload is about 6-8 MB. The sync is performed for one table only, using a non-transactional approach.
The MySQL server version in use is 4.1.22.
Questions:
Is it useful to make the sync transactional, knowing that only one table gets synced? Or is there no value added?
The sync data is loaded onto the remote machine using the MySQL statement:
LOAD DATA LOCAL INFILE
The file format is CSV. How can I send the data in a compressed format, without developing a tool that resides on the remote device?
Is it good practice or architecture, in the sync domain, to deploy a remote application that performs the sync after the data is sent, or should it be done directly by the master? I mean, a tool residing on the remote machine would be difficult to update or fix when new requirements appear, but it would save a lot of bandwidth for the sync operation and eliminate the errors that can arise from a live master sync when a disconnection occurs while the sync is in progress. So if this is recommended, only compressed data would be sent; then, using some sort of checksum, I would verify that all the data arrived, and otherwise the request would be initiated again.
Please share your thoughts and experience.
Thanks.
Firstly, I would change the approach to a client-initiated sync rather than a server-initiated one. A many-to-one approach will scale much more easily than your current one-to-many setup. My comments above give a few good examples of the required client-to-server syncing.
Secondly, turn on transactional record entry. There is no reason not to have it. This will guarantee that the information is entered in a timely fashion and can even provide more 'metadata' (such as which clients are slow to update, etc.).
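In outline, that just means wrapping the whole batch in a single transaction, so a dropped 3G link leaves the table untouched instead of half-loaded. A Python sketch with illustrative connection details and table names:
import mysql.connector  # assumption: the mysql-connector-python package

conn = mysql.connector.connect(host="vehicle.example.com", database="fleet")
cur = conn.cursor()
batch = [(1, "veh-42", "reading-a"), (2, "veh-42", "reading-b")]  # parsed CSV rows
try:
    conn.start_transaction()
    for row in batch:
        cur.execute("INSERT INTO readings VALUES (%s, %s, %s)", row)
    conn.commit()      # all rows land, or none do
except mysql.connector.Error:
    conn.rollback()    # a dropped connection undoes the whole batch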
Lastly, you can 'enhance' this uploading by taking a different look at it. If you were to implement a service on the server side that takes in a POST request from the client, you'd be able to send the data to the server with no issues. It would be just like 'uploading' a file to a server. Once your 6-8 MB file is 'uploaded', it is then put into the database. The great thing about this is that if your server is Apache (or, in your case, an IIS server), you'd be able to have every single client uploading data at the same time without much of an issue. At that point, uploading to the MySQL server via an insert would take virtually no time and your process would continue on without a problem.
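A sketch of the client side of that upload, with the payload gzipped to address the compression question (the URL and header names are made up for illustration):
import gzip
import requests  # assumption: the requests HTTP library

with open("sync.csv", "rb") as f:
    payload = gzip.compress(f.read())       # CSV compresses very well

resp = requests.post("https://master.example.com/sync-upload",
                     data=payload,
                     headers={"Content-Encoding": "gzip",
                              "X-Vehicle-Id": "42"})
resp.raise_for_status()  # the server unpacks the file and runs LOAD DATA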
This is the way I'd handle your situation...

How do you sync a database from a test server to a production server

If you have a test server and a production server, how do you sync the database?
Do you add the data on the test server first, just as you write code on the test server?
How is that generally handled?
That depends on how coupled you can allow the two databases to be.
One configuration might be to have the two databases replicated, with the production server in the master configuration and the test server as the slave. I would use caution with this method; unless you absolutely need live data synchronized with the test infrastructure, don't go down this path.
Instead, if you want to keep the two instances separated, just replicate with mysqldumps.
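For example, a small Python sketch that copies one instance to the other with mysqldump (the hosts, credentials, and database name are placeholders):
import subprocess

# Dump the source database...
dump = subprocess.run(
    ["mysqldump", "--host=prod.example.com", "--user=deploy",
     "--password=secret", "appdb"],
    check=True, capture_output=True).stdout

# ...and load it into the other instance.
subprocess.run(
    ["mysql", "--host=test.example.com", "--user=deploy",
     "--password=secret", "appdb"],
    input=dump, check=True)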
I version control SQL dumps (as made by mysqldump). I find it's usually a nice portable format.
Some platforms have concepts like Django's Fixtures which dumps models data to a JSON or XML file so you can version control it to/from the server and load it up where you need it. Whether you have something like this available depends on your platform.
You're talking about getting a live version in your production environment. If I were you, I'd settle for near-live. Cron up a dump script on the server and either write a little script to pull it into your production environment or just rely on version control.
Anything more than near-live will likely be two-way and risk live data.
I use Navicat to sync my development and production servers. The Navicat product allows structure syncing (i.e. new fields) and data syncing. The main limitation is that you can't sync any tables that don't have a primary key. And it's not always that fast if there are a lot of records to transfer.
We do all database development in scripts, including scripts to add records to lookup tables. These scripts are in source control and are versioned just like any other code. To send changes from dev to prod, we run the scripts for that version. This includes any changes needed to lookup-type tables as well as table structure changes, stored procedures, user-defined functions, views, etc. Since nothing can be deployed to prod without a script (we have a configuration management team that does the deploying, not developers), we have no trouble at all with people not using scripts or source control.
To go back from prod to dev, we restore the last prod backup and then rerun the dev scripts that have not yet been promoted to prod.