I have a local app that uses SQLite. Whenever it has internet access it requests the whole database from the server and recreates the local one from that. Local and Server databases have the same structure, basically the point of the local one is to guarantee function even when no internet is available.
This is a very inefficient way of doing this.
My question is: how do I request only the data that is missing?
Should I send the last ID from each local table and have the server send data from that ID onward?
What happens if a row with an existing ID was modified? That would mean all the data has to be checked, but sending the whole database for comparison just to get back the modifications or additions also seems wasteful.
The configuration is Local SQLite, Server MySQL. I could probably change the server to SQLite if it's recommended.
EDIT:
Multiple clients make requests to the same MySQL database on the server; a PHP script processes each request and replies.
How would you tackle this?
Thank you.
I'd either timestamp the rows in the database and fetch by date, or use rsync (or librsync or similar) to synchronize the database files.
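To illustrate the timestamp option: the sketch below assumes the server table gets an updated_at column that MySQL maintains automatically, so the client only has to remember the newest timestamp it has seen and ask for anything newer; that also catches modified rows, not just new IDs. The table, columns and connection details are made up, and Python/pymysql stands in here for the PHP endpoint that would actually run the query.

# One-off on the MySQL side: a column the server keeps current on every write.
#   ALTER TABLE items
#     ADD COLUMN updated_at TIMESTAMP NOT NULL
#     DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
import pymysql

def rows_changed_since(last_sync):
    # Returns every row inserted or modified after the client's last sync time.
    conn = pymysql.connect(host="db.example.com", user="app",
                           password="...", database="appdb")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name, updated_at FROM items "
                        "WHERE updated_at > %s ORDER BY updated_at", (last_sync,))
            return cur.fetchall()
    finally:
        conn.close()

The client stores the largest updated_at it received and sends it with the next request. Note that this picks up inserts and updates but not deletions; for those you would need soft deletes (a deleted flag) or a small tombstone table.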
Our application uses an SQL database for storing data which mustn't be modified by the user.
For now we are using a local SQLite database which is encrypted via SQLCipher and gets decrypted on application start with a private key set by us. This way the user can't modify any data without knowing this key, or even load the database in his favourite DB browser.
We now want to allow the database to be on a MySQL server, but as far as I understand, an equivalent way of securing the data isn't possible, especially because we want the user to be able to host his own server (the same way he used his "own" local SQLite file). I understand there is now a so-called "at rest" encryption for InnoDB in MySQL, but this seems to be completely transparent to the user: when the user connects to the database he doesn't have to enter a key for it to be decrypted; that happens for him automatically in the background.
Is there a way to allow the user to use his own MySQL server but prevent him from modifying any database we create on it? Or is this only possible with a server we host ourselves?
Let me first give a short comment regarding the method you have used until now.
I think that the concept has been wrong in the first place, because it is not secure. The decryption key has to be in the application because otherwise your users would not be able to open the database. As soon as the application runs, a user could extract that key from RAM using well-known methods / tools.
In contrast, when using a server in a locked room, you have real safety provided that the server software does not have bugs which allow users to attack it.
Thus, the answer to your question is:
Yes, it is wise to upgrade to MySQL.
Use one server for all users, physically located somewhere normal users have no access to.
No, do not try to encrypt the MySQL table files on the disk if your only concern is that users shall not be able to change the data.
Instead, assign access privileges on your central database and tables properly. If normal users have only the read (SELECT) privilege on all tables, they have no way to modify any data over the network, but can still read all of it. As far as I have understood, this is what you want.
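As a rough sketch of how such a read-only account could be set up on the central server (user name, password, database name and host pattern are placeholders; CREATE USER ... IF NOT EXISTS needs MySQL 5.7+, older servers can use plain CREATE USER):

# Run once with an administrative account; all names here are placeholders.
import pymysql

admin = pymysql.connect(host="db.example.com", user="admin", password="...")
with admin.cursor() as cur:
    # A dedicated account for the shipped application: it can read everything
    # in the application's database but cannot change anything.
    cur.execute("CREATE USER IF NOT EXISTS 'app_reader'@'%' IDENTIFIED BY 'reader-password'")
    cur.execute("GRANT SELECT ON appdb.* TO 'app_reader'@'%'")
admin.close()

Any INSERT, UPDATE or DELETE attempted with that account is rejected by the server itself, no matter how the client application has been modified.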
I am after some advice please. I am no developer and outsource my work requirements to various freelancers. I have a specific requirement but due to my lack of skills I’m not quite sure what to ask for, hence my question here.
I have a system where I have several Raspberry Pi "drones" that collect data. These drones are all connected to the web and at present instantly send the data via a live feed directly to a MySQL server hosted at Amazon. This server is accessible via a static IP address.
Each drone is given a unique ID and the data collected is tagged with that ID so we know where it comes from.
The existing MySQL server collects and processes all this data and we have a website that displays the stats. Nothing really complicated and the current system works very well.
The issue I have is that we occasionally have internet connection issues from the drones, so I want to make the whole system more robust. When a drone does have a connection issue we lose data, as the drones do not store anything locally, which is what I want to resolve.
Just as a heads up: due to the data structure the drones will not write to a file, they have to feed directly into a MySQL server.
To resolve this issue my plan is to have a MySQL server run on each RPi with the same table structure etc. as the main server. Each RPi will write to its own local MySQL server, and I then need that server to "update" the main server at Amazon. Please note the data will only ever be sent in this direction; it will never come from Amazon back to the drones. When a drone can communicate with the main server I would like the drone-based MySQL server to communicate pretty much instantly (or as close as I can get it), but where there is an internet connection issue I need the drone to store its own data until the connection is restored, at which point it will update the main server.
As I have said, I am no developer so I wouldn't be undertaking this work myself, but I would like to know what I need to ask for in order to get the right system.
If anyone can help I would appreciate some pointers. In addition, if this is the type of work you could undertake please feel free to let me know and maybe we could talk further via PM; after all, someone needs to do it.
Many Thanks.
I recommend a scheduled update to the Amazon database, in whatever programming language you are already using: each new reading is inserted into the local MySQL buffer as it is collected, and a job that runs on a schedule pushes the buffered rows to the remote server, deleting each local row only once it has been stored remotely. Something that looks like this Python sketch (connection details and the readings table/columns are placeholders):
import pymysql
LOCAL = dict(host="localhost", user="drone", password="...", database="buffer")       # placeholders
REMOTE = dict(host="amazon-host", user="drone", password="...", database="central")   # placeholders

def push_buffered_rows():
    local = pymysql.connect(**LOCAL)
    try:
        remote = pymysql.connect(**REMOTE)   # fails straight away if the internet link is down
    except pymysql.Error:
        local.close()
        return
    with local.cursor() as lc, remote.cursor() as rc:
        lc.execute("SELECT id, payload FROM readings ORDER BY id")
        for row_id, payload in lc.fetchall():
            try:
                rc.execute("INSERT INTO readings (payload) VALUES (%s)", (payload,))
                remote.commit()
            except pymysql.Error:
                break   # connection dropped mid-sync: keep the row and retry on the next run
            # optional: SELECT the remote record back to check it was stored correctly
            lc.execute("DELETE FROM readings WHERE id = %s", (row_id,))
            local.commit()
    remote.close()
    local.close()
I am creating a WP8 App.
I have created a SQLite database in the isolated storage.
Now my data keeps updating and I want to regularly download the latest data from the server database and update the local database.
The database on the WP8 device cannot be changed on the client side, so the data merging is one-way only.
What is the best way and service to use?
If you do not work with a large database, you might prefer to replace the device database and not worry about merging. This can be as simple as making an export of the server database, transferring it to the device and then importing it into the device database. The appropriate method of dumping the database on the server side is dependent on the type of database (e.g. mysqldump in the case of MySQL).
If you do work with a large database, or if you are struggling with bandwidth on the device, you might want to use a technique that detects differences. One of the easiest methods is change tracking on the database: every modification is logged with a change_at timestamp. The device can then remember the last modification it contains, get the newer entries, and replicate those changes locally. (For an in-depth explanation, please provide more information about the server environment and data structure.)
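To sketch what the change-tracking variant could look like: the WP8 client would do this in C# with its own SQLite library, so the Python below is only to show the flow; the table, columns and connection details are invented, and change_at is assumed to be set by the server on every insert and update.

import sqlite3
import pymysql

def pull_changes(last_seen):
    # Fetch rows changed on the server since last_seen and mirror them locally (one-way).
    server = pymysql.connect(host="db.example.com", user="app",
                             password="...", database="appdb")
    local = sqlite3.connect("local.db")
    with server.cursor() as cur:
        cur.execute("SELECT id, name, change_at FROM items WHERE change_at > %s",
                    (last_seen,))
        rows = cur.fetchall()
    # The device never writes its own data, so a plain upsert is all the merging needed.
    local.executemany("INSERT OR REPLACE INTO items (id, name) VALUES (?, ?)",
                      [(r[0], r[1]) for r in rows])
    local.commit()
    server.close()
    local.close()
    # The newest change_at that was applied becomes last_seen for the next sync.
    return max((r[2] for r in rows), default=last_seen)

The device only needs to persist last_seen between runs; everything else can be re-derived from the server.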
I'm building a new rails app for a client. They already have a separate rails app that manages users (with all the standard Devise fields) and don't want to have to maintain users in both apps, which makes total sense.
I'm able to connect to their remote database using database.yml for the connection details and establish_connection: in my User model. It works, although it is a bit slow (going over the public internet). I'm concerned that relying on this remote database for something that is queried A LOT will seriously slow down my app. I also won't be able to do joins with the remote database.
My thought is to duplicate the user table in my app and have a cron job that runs once every few hours (or even more frequently) that keeps my table in sync with the "master".
Is there any reason not to do that? Is it a terrible idea from a design perspective?
I should mention that my DB is postgres and the remote DB is mysql. I also started reading up on the DbCharmer gem (http://dbcharmer.net/) but I don't fully understand it yet.
--Edit:--
I should also mention that I will need to read other tables from the remote DB, not just the users table.
I would recommend caching their DB locally, so when you look up a remote record you record it locally (if it existed remotely) or you record a negative result locally if it didn't exist remotely - you cache a record of the remote record's absence. Remember to cache negative results for less time than positive results.
You can then look at your local cache and see if there's a fresh-enough result to return and only query the remote if the locally cached result is stale or there isn't a locally cached result.
This is how I'd do it personally; I'd cache rather than copy and sync. You can certainly combine the two approaches by pre-fetching commonly fetched things into the cache on a regular basis, though.
There's no need to use Pg for the local cache; you can just as easily use redis/memcached/whatever (and I'm a Pg dev, so I'm not exactly biased in favour of Redis).
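A minimal sketch of that cache logic, using an in-process dict and invented TTL values just to show the shape; the same pattern maps directly onto redis or memcached keys with expiry times.

import time

POSITIVE_TTL = 300          # seconds to trust a cached user row (illustrative values)
NEGATIVE_TTL = 30           # cache "no such user" for much less time
_cache = {}                 # user_id -> (expires_at, row_or_None)

def fetch_user(user_id, query_remote):
    # query_remote is whatever function actually queries the remote users table.
    entry = _cache.get(user_id)
    if entry and entry[0] > time.time():
        return entry[1]                        # fresh enough: positive or negative hit
    row = query_remote(user_id)                # the slow call over the public internet
    ttl = POSITIVE_TTL if row is not None else NEGATIVE_TTL
    _cache[user_id] = (time.time() + ttl, row)
    return row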
Currently we have one master MySQL server that connects every hour to 100 remote mobile devices (vehicles) over a 3G connection (not very reliable: a few cars get disconnected daily while a sync is in progress). The sync is done through a .NET Windows service tool. After checking the status of the remote MySQL instance, the master starts performing the sync. Sometimes the sync payload is about 6-8 MB. The sync is performed for one table only, using a non-transactional approach.
The MySQL server version in use is 4.1.22.
Questions:
Is it useful to make the sync transactional, given that only one table is being synced, or does it add no value?
The sync data is loaded onto the remote machine using the MySQL statement:
LOAD DATA LOCAL INFILE
The file format is CSV. How can I send the data in a compressed format, without developing a tool that resides on the remote device?
Is it good practice or architecture, in the sync domain, to deploy a remote application that performs the sync after the data has been sent, or should the sync be done directly by the master? I mean that a tool residing on the remote machine will be difficult to update or fix when new requirements appear, but it would save a lot of bandwidth for the sync operation and would eliminate the errors that can arise when the connection drops while a live sync from the master is in progress. If this approach is recommended, then only compressed data would be sent, and by using some sort of checksum I would verify that all the data arrived; otherwise the request would be initiated again.
Please share your thoughts and experience.
Thanks.
Firstly, I would change the approach to a client-initiated sync rather than a server-initiated sync. A many-to-one approach will scale much more easily than your current one-to-many setup. My comments above give a few good examples of what client-to-server syncing requires.
Secondly, turn on transactional record entry. There is no reason not to have it. It guarantees that each batch of records is entered completely or not at all, and it can also give you extra 'meta-data' (such as which clients are slow to update, etc.).
Lastly, you can 'enhance' this uploading by taking a different look at it. If you were to implement a service on the server side that accepts a POST from the client, you'd be able to send the data to the server side with no issues. It would be just like 'uploading' a file to a server. Once your 6-8 MB file is 'uploaded', it is then put into the database. The great thing about this is that if your server is Apache (or, in your case, an IIS server), you'd be able to have every single client uploading data at the same time without much of an issue. At that point, inserting into the MySQL server would take virtually no time and your process would continue on without a problem.
This is the way I'd handle your situation...
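To make the POST idea concrete, here is a rough client-side sketch. It assumes the Python requests library, gzip compression (which also addresses the bandwidth question), and an invented upload URL and header names; the real tool would be the existing .NET service, and the server-side handler would decompress the body, verify the checksum, and run LOAD DATA on the resulting CSV.

import gzip
import hashlib
import requests   # assumed HTTP client library; any equivalent works

def upload_csv(path, device_id):
    # Gzip the CSV produced on the vehicle and POST it to the collection endpoint.
    with open(path, "rb") as f:
        raw = f.read()
    body = gzip.compress(raw)                  # 6-8 MB of CSV compresses very well
    checksum = hashlib.md5(raw).hexdigest()    # lets the server verify the whole file arrived
    resp = requests.post(
        "https://collector.example.com/upload",    # invented endpoint
        data=body,
        headers={
            "Content-Encoding": "gzip",
            "X-Device-Id": device_id,
            "X-Checksum-MD5": checksum,
        },
        timeout=120,
    )
    resp.raise_for_status()    # the caller can retry later if the 3G link dropped mid-upload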