Can I keep a WebSQL database open to improve performance?

I have an HTML5 mobile app running on iOS and Android. Users will normally have a little bit of local data stored in a few tables. Let's say five tables with an average of three records.
Performance of WebSQL is really bad. I read in this post that much of the delay is probably in opening and closing the database for each transaction. My users will normally only do one transaction at a time, so the time needed to open and close the database for each operation will usually be a relatively big chunk of the total time needed.
I am wondering if I could just open the database once, dispense with all the transaction wrappers and execute the SQL straight away.
The table is never used by any other person or process than the user updating their data, or the app reading the data after an update and sending the data to a server for calculations and statistics.
Most crucially: if I follow the above strategy, and the database is never closed, but the user or the OS closes the app (properly speaking: the webview), will the changed data persist or be lost?
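Whatever the persistence answer turns out to be, most of the per-operation cost can be amortized without dropping transactions entirely: batch several statements into one transaction instead of opening one transaction per statement. A minimal sketch, assuming the standard `db.transaction`/`tx.executeSql` WebSQL shape; the stub DB exists only so the pattern can be exercised outside a browser, where you would use `window.openDatabase` instead:

```javascript
// Sketch: amortize per-transaction overhead by batching statements.
// `statements` is an array of [sql, args] pairs; all of them run
// inside a single WebSQL transaction.
function batchExec(db, statements, done) {
  db.transaction(
    (tx) => {
      for (const [sql, args] of statements) {
        tx.executeSql(sql, args || []);
      }
    },
    (err) => done(err),   // transaction error callback
    () => done(null)      // transaction success callback
  );
}

// Minimal in-memory stub standing in for a WebSQL database handle
// (assumption: real code gets the handle from window.openDatabase).
function makeStubDb() {
  const log = [];
  return {
    log,
    transaction(body, onError, onSuccess) {
      try {
        body({ executeSql: (sql, args) => log.push({ sql, args }) });
        if (onSuccess) onSuccess();
      } catch (e) {
        if (onError) onError(e);
      }
    },
  };
}
```

With a real handle, the two inserts below would run under one BEGIN/COMMIT instead of two, which is exactly the overhead the question is worried about.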

Okay, I found the problem. I use the persistenceJS framework to deal with the local database. It keeps a copy of the WebSQL data in a JS object and keeps the database and the JS object in sync. That's a process that takes a while, and I was putting everything in the "flush" handler, which runs after the sync.

I also keep the connection open. With IndexedDB I can keep a connection open on the UI thread and a background thread at the same time without observing any problems, and I believe WebSQL behaves the same way. If you are using plain JS files, you could try my own JavaScript library; it is a very thin wrapper over both IndexedDB and WebSQL, though its API follows the IndexedDB style.

Related

MySQL DB heavy load and numerous connections

At work I need to revamp a website that must always accept numerous connections. Until now I have fetched the data as JSON, but now I want to call the DB directly to get the data. As far as I know, caching is the best approach for my site, but initially there will often be concurrent access to the DB. Any advice on how to handle this situation? I want the site to always serve up-to-date data.
Thanks.
Following are my suggestions:
If you want to use a cache, you have to automate your cache-clear process whenever the particular data you hit is updated. This is only practical if your data is updated infrequently.
If your budget allows, put your DB in a cluster (write to the master, read from both master and slave).
In the worst case, ensure your DB is properly indexed.
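Suggestion 1 can be sketched as a cache-aside layer that drops an entry whenever the corresponding data is written, so reads stay fresh without hitting the DB on every request. `fetchFromDb` and `writeToDb` are placeholders for your real queries, not any particular library's API:

```javascript
// Cache-aside with write invalidation: reads go to the cache first,
// writes go through to the DB and evict the now-stale cache entry.
class CacheAside {
  constructor(fetchFromDb) {
    this.fetchFromDb = fetchFromDb; // placeholder for a real SELECT
    this.cache = new Map();
  }
  get(key) {
    if (!this.cache.has(key)) {
      // cache miss: fall back to the DB and remember the result
      this.cache.set(key, this.fetchFromDb(key));
    }
    return this.cache.get(key);
  }
  update(key, value, writeToDb) {
    writeToDb(key, value);  // write through to the DB...
    this.cache.delete(key); // ...and evict, so the next read is fresh
  }
}
```

The eviction on update is the "automated cache clear" part; without it, readers keep seeing the pre-update value.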

Reliability Android when connection is off

I'm developing an app where I store my data in an online DB using HTTP POST and GET.
I need to implement some reliability in my software: if the user presses the button and there is no connection, the data should be stored somewhere (a file? SQLite?) and then, when the connection is back, the HTTP request should be sent.
Any advice or pieces of code to show me how to do this?
Thanks.
Sounds good and pretty straightforward to me. Just go for it.
You use a local SQLite DB as a "cache". To keep it simple, do not implement any logic about that in your app's normal code; just use the local DB. Then, separately, you code a synchronizer. That one checks for an online connection and synchronizes the local SQLite database with a remote database, maybe MySQL.
This should be perfectly fine for all applications that do not require immediate exchange of the data with other processes all the time.
There is one catch, though: the low performance of sqlite on bigger data sets. That is an issue with all single file database solutions. So this approach probably is only valid for small data sets in total, or if you can reduce the usage of the local database to only a part of the total data, maybe only the time critical stuff.
Another workaround might be to use joins over two separate databases, the local and the remote one. But such things really boost the complexity of code, so think thrice if that really is required.
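The local-DB-plus-synchronizer idea can be sketched as an outbox: writes always land in a local queue (in a real app, a SQLite table), and a separate synchronizer drains it whenever a connectivity check succeeds. `isOnline` and `send` are assumptions standing in for a real connectivity check and HTTP POST:

```javascript
// Outbox pattern: the app only ever writes locally; a separate
// synchronizer drains the queue when the connection is back.
class Outbox {
  constructor() {
    this.pending = []; // stand-in for a SQLite "pending_requests" table
  }
  enqueue(request) {
    this.pending.push(request); // always persist locally first
  }
  sync(isOnline, send) {
    if (!isOnline()) return 0;  // offline: leave everything queued
    let sent = 0;
    while (this.pending.length > 0) {
      send(this.pending.shift()); // oldest first, so order is preserved
      sent++;
    }
    return sent;
  }
}
```

In a real app `sync` would run on a timer or on a connectivity-change event, and each request would only be removed from the queue after the server acknowledges it.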

iOS - Core Data and Server Database Synchronization Best Practices [duplicate]

This question already has answers here:
Client-server synchronization pattern / algorithm?
(7 answers)
Closed 9 years ago.
I am starting to setup the core data model for a large scale app, and was hoping for some feedback on proper synchronization methods/techniques when it comes to server database and offline capabilities.
I use PHP and mySQL for my web server / database.
I already know how to connect, receive data, store to core data, etc etc. I am looking more for help with the methodologies and particular instances of tracking data changes to:
A) Ensure the app and server are in sync during online and offline use (i.e. offline activity will get pushed up once back online).
B) Optimize the speed of saving data to the app.
My main questions are:
What is the best way to check what new/updated data in the app still needs to be synchronized (after offline use)?
(i.e. In all my Core Data entities I put an 'isSynchronized' attribute of BOOL type, then update it to 'YES' once the data is successfully submitted and a response comes back from the server.) Is this the best way?
What is the best way to optimize speed of saving data from server to core data?
(i.e. How can I update only the data in Core Data that is older than what is on the server database, without iterating through each entity and updating every single one every time?) Is it possible without adding a server database column for tracking update timestamps to EVERY table?
Again, I already know how to download data and store it to Core Data, I am just looking for some help with best practices in ensuring synchronization across app and server databases while ensuring optimized processing time.
I store a last-modified timestamp on both the Core Data records on the phone and the MySQL tables on the server.
The phone searches for everything that has changed since the last sync and sends it up to the server along with the timestamp of that sync, and the server responds with everything that has changed on its end since the provided sync timestamp.
Performance is an issue when a lot of records have changed. I do the sync on a background NSOperation which has its own managed object context. When the background thread has finished making changes to its managed object context, there is an API for merging all of the changes into the main thread's managed object context - which can be configured to simply throw away all the changes if there are any conflicts caused by the user changing data while the sync is going on. In that case, I just wait a few seconds and then try the sync again.
On older hardware even after many optimisations it was necessary to abort the sync entirely if the user starts doing stuff in the app. It was simply using too many system resources. I think more modern iOS devices are probably fast enough you don't need to do that anymore.
(by the way, when I said "a lot of records have changed" I meant 30,000 or so rows being updated or inserted on the phone)
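The timestamp scheme above can be sketched in a few lines: each side keeps a lastModified per record, only records changed since the previous sync are exchanged, and the newest write wins on merge. Field names are illustrative, not Core Data API:

```javascript
// Delta sync helpers: pick out local changes to push, and merge
// incoming remote changes with a newest-write-wins rule.

// Everything touched since the last successful sync.
function changedSince(records, lastSync) {
  return records.filter((r) => r.lastModified > lastSync);
}

// Merge remote changes into the local set; a change only replaces a
// local record if it is strictly newer.
function applyChanges(local, changes) {
  const byId = new Map(local.map((r) => [r.id, r]));
  for (const c of changes) {
    const mine = byId.get(c.id);
    if (!mine || c.lastModified > mine.lastModified) {
      byId.set(c.id, c);
    }
  }
  return [...byId.values()];
}
```

This is also why the answer stores timestamps on both sides: `changedSince` runs on the phone with the phone's records, and on the server with the server's rows, using the same sync timestamp.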

Database update outside application

Am I correct in assuming that if a different process updates the DB, my NHibernate-powered application will be out of sync? I'm mostly using non-lazy updates.
My target DB is mysql 5.0, if it makes any difference.
There isn't a simple way to answer that without more context.
What type of application are you thinking about (web, desktop, other)?
What do you think would be out of sync exactly?
If you have a desktop application with an open window with an open session that has data loaded and you change the same entities somewhere else, of course the DB will be out of sync, but you can use Refresh to update those entities.
If you use NH second-level caching and you modify the cached entities somewhere else, the cache contents will be out of sync, but you can still use Refresh or cache-controlling methods to update directly from the DB.
In all cases, NH provides support for optimistic concurrency by using Version properties; those prevent modifications to out-of-sync entities.
Yes, the objects in your current session will be out of sync, the same way a DataSet/DataTable would be out of sync if you fetch it and another process updates the same data.
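The Version-property mechanism mentioned above can be sketched as a compare-and-bump update: the write only succeeds if the row still carries the version the entity was loaded with, so a stale writer finds out immediately instead of silently overwriting. The in-memory row is a stand-in for a real DB row, not NHibernate's actual API:

```javascript
// Optimistic concurrency via a version column: update succeeds only
// if nobody else has bumped the version since this entity was loaded.
function versionedUpdate(row, expectedVersion, newData) {
  if (row.version !== expectedVersion) {
    return false; // out of sync: someone else updated the row first
  }
  Object.assign(row, newData);
  row.version += 1; // bump, so any other stale writer fails too
  return true;
}
```

In SQL terms this corresponds to `UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?` and checking the affected-row count, which is essentially what version-mapped entities do for you.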

SQLite concurrency issue a deal breaker?

I am looking at databases for a home project (ASP.NET MVC) which I might host eventually. After reading a similar question here on Stack Overflow I have decided to go with MySQL.
However, the ease of use and deployment of SQLite is tempting, and I would like to confirm my reasons before I write it off completely.
My goal is to maintain user status messages (like Twitter). This would mean mostly a single table with user-id/status-message pairs, with read/insert/delete operations on status messages; no updates are necessary.
After reading the following paragraph I have decided that SQLite can't work for me. I DO have a simple database, but since ALL my transactions work with the SAME table I might face some problems.
SQLite uses reader/writer locks on the entire database file. That means if any process is reading from any part of the database, all other processes are prevented from writing any other part of the database. Similarly, if any one process is writing to the database, all other processes are prevented from reading any other part of the database.
Is my understanding naive? Would SQLite work fine for me? Also does MySQL offer something that SQLite wouldn't when working with ASP.NET MVC? Ease of development in VS maybe?
If you're willing to wait half a month, the next SQLite release intends to support write-ahead logging, which should allow readers to proceed concurrently with a writer.
I've been unable to get even the simple concurrency SQLite claims to support to work - even after asking on SO a couple of times.
Edit
Since I wrote the above, I have been able to get concurrent writes and reads to work with SQLite. It appears I was not properly disposing of NHibernate sessions - putting Using blocks around all code that created sessions solved the problem.
/Edit
But it's probably fine for your application, especially with the Write-ahead Logging that user380361 mentions.
Small footprint, single file installation, fast, works well with NHibernate, free, public domain - a very nice product in almost all respects!
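One practical consequence of the database-wide write lock quoted in the question is that a write can fail with SQLITE_BUSY while another connection holds the lock. A common pattern is to retry with a backoff rather than surface the error; a sketch, where `attempt` is a placeholder for the real statement execution:

```javascript
// Retry a write that fails because another connection holds SQLite's
// write lock. Any non-busy error (or running out of retries) is
// rethrown to the caller.
function retryOnBusy(attempt, maxRetries) {
  for (let i = 0; i <= maxRetries; i++) {
    try {
      return attempt();
    } catch (e) {
      if (e.code !== "SQLITE_BUSY" || i === maxRetries) throw e;
      // in real code: sleep with backoff here, or let SQLite do the
      // waiting for you via PRAGMA busy_timeout
    }
  }
}
```

For the workload in the question (short inserts and deletes on one small table), a modest busy timeout plus retries is usually enough; the lock is held only for the duration of each short transaction.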